A Knowledge Plane for the Internet

David D. Clark, M.I.T. Lab for Computer Science, 200 Technology Square, Cambridge, MA 02139 (ddc@lcs.mit.edu)

Craig Partridge, BBN Technologies, 10 Moulton St, Cambridge, MA 02138 (craig@bbn.com)

J. Christopher Ramming, SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025, USA (chrisramming@yahoo.com)

John T. Wroclawski, M.I.T. Lab for Computer Science, 200 Technology Square, Cambridge, MA 02139 (jtw@lcs.mit.edu)

Abstract:

We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high-level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so. We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The Knowledge Plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.





A Routing Underlay for Overlay Networks

Akihiro Nakao

Larry Peterson

Andy Bavier

Department of Computer Science

Princeton University

Abstract:

We argue that designing overlay services to independently probe the Internet--with the goal of making informed application-specific routing decisions--is an untenable strategy. Instead, we propose a shared routing underlay that overlay services query. We posit that this underlay must adhere to two high-level principles. First, it must take cost (in terms of network probes) into account. Second, it must be layered so that specialized routing services can be built from a set of basic primitives. These principles lead to an underlay design where lower layers expose large-scale, coarse-grained static information already collected by the network, and upper layers perform more frequent probes over a narrow set of nodes. This paper proposes a set of primitive operations and three library routing services that can be built on top of them, and describes how such libraries could be useful to overlay services.





Greening of the Internet

Maruti Gupta, Department of Computer Science, Portland State University, Portland, OR 97207 (mgupta@cs.pdx.edu)

Suresh Singh, Department of Computer Science, Portland State University, Portland, OR 97207 (singh@cs.pdx.edu)

Abstract:

In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.





A Delay-Tolerant Network Architecture for Challenged Internets

Kevin Fall

Intel Research, Berkeley

kfall@intel-research.net

Abstract:

The highly successful architecture and protocols of today's Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme environments lacking continuous connectivity, many such networks have their own specialized protocols, and do not utilize IP. To achieve interoperability between them, we propose a network architecture and application interface structured around optionally-reliable asynchronous message forwarding, with limited expectations of end-to-end connectivity and node resources. The architecture operates as an overlay above the transport layers of the networks it interconnects, and provides key services such as in-network data storage and retransmission, interoperable naming, authenticated forwarding and a coarse-grained class of service.





Routing Using Potentials: A Dynamic Traffic-Aware Routing Algorithm

Anindya Basu, Bell Laboratories (basu@research.bell-labs.com)

Alvin Lin, MIT (alvinl@mit.edu)

Sharad Ramanathan, Bell Laboratories (sharadr@physics.bell-labs.com)

Abstract:

We present a routing paradigm called PB-routing that utilizes steepest gradient search methods to route data packets. More specifically, the PB-routing paradigm assigns scalar potentials to network elements and forwards packets in the direction of maximum positive force. We show that the family of PB-routing schemes is loop-free and that the standard shortest path routing algorithms are a special case of the PB-routing paradigm. We then show how to design a potential function that accounts for traffic conditions at a node. The resulting routing algorithm routes around congested areas while preserving the key desirable properties of IP routing mechanisms, including hop-by-hop routing, local route computations and statistical multiplexing. Our simulations using the ns simulator indicate that the traffic-aware routing algorithm shows significant improvements in end-to-end delay and jitter when compared to standard shortest path routing algorithms. The simulations also indicate that our algorithm does not incur excessive control overhead and is fairly stable even when traffic conditions are dynamic.
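
The forwarding rule in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's code: the topology, the BFS-based potential, and the tie-breaking are our own choices. Using hop count to the destination as the potential recovers shortest-path forwarding; a traffic-aware variant would add a congestion term to each node's potential.

```python
from collections import deque

def hop_potentials(adj, dest):
    """BFS hop counts to dest serve as the scalar potential field."""
    dist = {dest: 0}
    q = deque([dest])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def next_hop(adj, phi, v):
    """Forward in the direction of maximum positive force phi(v) - phi(u)."""
    best, best_force = None, 0
    for u in adj[v]:
        force = phi[v] - phi[u]
        if force > best_force:
            best, best_force = u, force
    return best

# square topology: A-B-D and A-C-D
adj = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
phi = hop_potentials(adj, 'D')
print(next_hop(adj, phi, 'A'))  # 'B' (C ties; the first maximal neighbor wins)
```

Because packets always move to strictly lower potential, no forwarding loop can form, which is the loop-freedom property claimed above.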





Network Routing with Path Vector Protocols: Theory and Applications

Joao Luis Sobrinho

Instituto de Telecomunicacoes, Instituto Superior Tecnico, Portugal

joao.sobrinho@lx.it.pt

Abstract:

Path vector protocols are currently in the limelight, mainly because the inter-domain routing protocol of the Internet, BGP (Border Gateway Protocol), belongs to this class. In this paper, we cast the operation of path vector protocols into a broad algebraic framework and relate the convergence of the protocol, and the characteristics of the paths to which it converges, with the monotonicity and isotonicity properties of its path compositional operation. Here, monotonicity means that the weight of a path cannot decrease when it is extended, and isotonicity means that the relationship between the weights of any two paths with the same origin is preserved when both are extended to the same node. We show that path vector protocols can be made to converge for every network if and only if the algebra is monotone, and that the resulting paths selected by the nodes are optimal if and only if the algebra is isotone as well.
Many practical conclusions can be drawn from instances of the generic algebra. For performance-oriented routing, typical in intra-domain routing, we conclude that path vector protocols can be made to converge to widest or widest-shortest paths, but that the composite metric of IGRP (Interior Gateway Routing Protocol), for example, does not guarantee convergence to optimal paths. For policy-based routing, typical in inter-domain routing, we formulate existing guidelines as instances of the generic algebra and we propose new ones. We also show how a particular instance of the algebra yields a sufficient condition for signaling correctness of internal BGP.
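
As a concrete instance of the framework, the widest-path algebra mentioned above (a path's weight is its bottleneck bandwidth, extension takes the min with the new link's capacity, and wider is better) can be checked for both properties by brute force over a toy weight set. The snippet is our own illustration, not the paper's formalism.

```python
# widest-path algebra: extending a path by a link of capacity l
# turns a path of width w into one of width min(l, w)
extend = min
widths = range(1, 6)  # small exhaustive check over toy bandwidths

# monotone (in the wider-is-better order): extending a path
# never makes it more preferred, i.e. never widens it
monotone = all(extend(l, w) <= w for l in widths for w in widths)

# isotone: extending two paths by the same link preserves their order
isotone = all(extend(l, w1) >= extend(l, w2)
              for l in widths for w1 in widths for w2 in widths if w1 >= w2)

print(monotone, isotone)  # True True
```

Since the algebra is both monotone and isotone, the framework predicts that a path vector protocol over it converges, and converges to optimal (widest) paths.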





Design Principles of Policy Languages for Path Vector Protocols

Timothy G. Griffin, AT&T Labs - Research, Florham Park, NJ, USA (griffin@research.att.com)

Aaron D. Jaggard, Dept. of Mathematics, Tulane University, New Orleans, LA, USA (adj@math.tulane.edu)

Vijay Ramachandran, Dept. of Computer Science, Yale University, New Haven, CT, USA (vijayr@cs.yale.edu)

Abstract:

BGP is unique among IP-routing protocols in that routing is determined using semantically rich routing policies. However, this expressiveness has come with hidden risks. The interaction of locally defined routing policies can lead to unexpected global routing anomalies, which can be very difficult to identify and correct in the decentralized and competitive Internet environment. These risks increase as the complexity of local policies increases, which is precisely the current trend. BGP policy languages have evolved in a rather organic fashion with little effort to avoid policy-interaction problems. We believe that researchers should start to consider how to design policy languages for path-vector protocols that avoid such risks and yet retain other desirable features. We take a few steps in this direction by identifying the important dimensions of this design space and characterizing some of the inherent design trade-offs. We attempt to do this in a general way that is not constrained by the details of BGP.





Low-Rate TCP-Targeted Denial of Service Attacks (The Shrew vs. the Mice and Elephants)

Aleksandar Kuzmanovic

Edward W. Knightly

ECE/CS Departments

Rice University

Houston, TX 77005, USA

{akuzma,knightly}@rice.edu

Abstract:

Denial of Service attacks are presenting an increasing threat to the global inter-networking infrastructure. While TCP's congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP's retransmission time-out mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized time-out mechanisms to thwart such low-rate DoS attacks.
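
The traffic pattern this abstract describes can be illustrated with back-of-envelope arithmetic; all of the numbers below are assumptions for illustration, not measurements from the paper. A short burst at the bottleneck rate, repeated once per period chosen near TCP's minimum retransmission timeout, keeps the attacker's average rate low while repeatedly forcing flows into timeout.

```python
# illustrative parameters (assumed, not from the paper)
bottleneck_mbps = 10.0   # burst rate: enough to fill the bottleneck queue
burst_ms = 100           # burst length, on the order of the RTT
period_s = 1.0           # inter-burst period, chosen near TCP's minRTO (~1 s)

# retransmissions sent after each timeout meet a freshly filled queue and
# are lost again, yet the attack's average rate stays modest:
avg_rate_mbps = bottleneck_mbps * (burst_ms / 1000.0) / period_s
print(f"average attack rate: {avg_rate_mbps:.1f} Mb/s")  # 1.0 Mb/s
```

A rate-based detector watching for flows near 10 Mb/s would see only 1 Mb/s on average, which is why such "shrew" patterns elude detection.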





Robustness to Inflated Subscription in Multicast Congestion Control

Sergey Gorinsky, Sugat Jain, and Harrick Vin, Department of Computer Sciences, University of Texas at Austin ({gorinsky,sugat,vin}@cs.utexas.edu)

Yongguang Zhang, HRL Laboratories, LLC, Malibu, California (ygz@hrl.com)

Abstract:

Group subscription is a useful mechanism for multicast congestion control: RLM, RLC, FLID-DL, and WEBRC form a promising line of multi-group protocols where receivers provide no feedback to the sender but control congestion via group membership regulation. Unfortunately, the group subscription mechanism also offers receivers an opportunity to elicit self-beneficial bandwidth allocations. In particular, a misbehaving receiver can ignore guidelines for group subscription and choose an unfairly high subscription level in a multi-group multicast session. This poses a serious threat to fairness of bandwidth allocation. In this paper, we present the first solution for the problem of inflated subscription. Our design guards access to multicast groups with dynamic keys and consists of two independent components: DELTA (Distribution of ELigibility To Access), a novel method for in-band distribution of group keys to receivers that are eligible to access the groups according to the congestion control protocol, and SIGMA (Secure Internet Group Management Architecture), a generic architecture for key-based group access at edge routers.





A Framework for Classifying Denial of Service Attacks

Alefiya Hussain

John Heidemann

Christos Papadopoulos

USC/Information Sciences Institute

{hussain,johnh,christos}@isi.edu

Abstract:

Launching a denial of service (DoS) attack is trivial, but detection and response is a painfully slow and often a manual process. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on header content, transient ramp-up behavior and novel techniques such as spectral analysis. Although headers are easily forged, we show that characteristics of attack ramp-up and attack spectrum are more difficult to spoof. To evaluate our framework we monitored access links of a regional ISP detecting 80 live attacks. Header analysis identified the number of attackers in 67 attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis. We validate our results through monitoring at a second site, controlled experiments, and simulation. We use experiments and simulation to understand the underlying reasons for the characteristics observed. In addition to helping understand attack dynamics, classification mechanisms such as ours are important for the development of realistic models of DoS traffic, can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to estimate the level of DoS activity on the Internet.
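
The spectral-analysis idea can be illustrated roughly as follows; the synthetic series, sampling interval, and naive DFT are our own simplification, not the paper's tool. A periodically bursty attack stream produces a clear peak in the power spectrum of its packet-count time series.

```python
import cmath

def power_spectrum(x):
    """Naive DFT power spectrum (O(n^2); fine for a short illustration)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]  # drop the DC component
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2 + 1)]

# synthetic per-millisecond packet counts: a 2 ms burst every 10 ms
series = [5 if t % 10 < 2 else 1 for t in range(100)]
spec = power_spectrum(series)
peak_bin = max(range(1, len(spec)), key=spec.__getitem__)
print(peak_bin)  # 10: the fundamental of the 10 ms burst period (100 Hz)
```

Spoofing header fields leaves such timing signatures intact, which is why a spectrum-based classifier is harder to evade than a header-based one.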





Quantifying the Causes of Path Inflation

Neil Spring

Ratul Mahajan

Thomas Anderson

{nspring,ratul,tom}@cs.washington.edu

Computer Science and Engineering

University of Washington

Seattle, WA 98195-2350

Abstract:

Researchers have shown that the Internet exhibits path inflation: end-to-end paths can be significantly longer than necessary. We present a trace-driven study of 65 ISPs that characterizes the root causes of path inflation, namely topology and routing policy choices within an ISP, between pairs of ISPs, and across the global Internet. To do so, we develop and validate novel techniques to infer intra-domain and peering policies from end-to-end measurements. We provide the first measured characterization of ISP peering policies. In addition to "early-exit," we observe a significant degree of helpful non-early-exit, load-balancing, and other policies in use between peers. We find that traffic engineering (the explicit addition of policy constraints on top of topology constraints) is widespread in both intra- and inter-domain routing. However, intra-domain traffic engineering has minimal impact on path inflation, while peering policies and inter-domain routing lead to significant inflation. We argue that the underlying cause of inter-domain path inflation is the lack of BGP policy controls to provide convenient engineering of good paths across ISPs.





The Impact of Address Allocation and Routing on the Structure and Implementation of Routing Tables

Harsha Narayan, University of California, San Diego (hnarayan@cs.ucsd.edu)

Ramesh Govindan, University of Southern California (ramesh@usc.edu)

George Varghese, University of California, San Diego (varghese@cs.ucsd.edu)

Abstract:

The recent growth in the size of the routing table has led to an interest in quantitatively understanding both the causes (e.g., multihoming) as well as the effects (e.g., impact on router lookup implementations) of such routing table growth. In this paper, we describe a new model called ARAM that defines the structure of routing tables of any given size. Unlike simpler empirical models that work backwards from effects (e.g., current prefix length distributions), ARAM approximately models the causes of table growth (allocation by registries, assignment by ISPs, multihoming and load balancing). We show that ARAM models with high fidelity three abstract measures (prefix distribution, prefix depth, and number of nodes in the tree) of the shape of the prefix tree -- as validated against 20 snapshots of backbone routing tables from 1997 to the present. We then use ARAM for evaluating the scalability of IP lookup schemes, and studying the effects of multihoming and load balancing on their scaling behavior. Our results indicate that algorithmic solutions based on multibit tries will provide more prefixes per chip than TCAMs (as table sizes scale toward a million) unless TCAMs can be engineered to use 8 transistors per cell. By contrast, many of today's SRAM-based TCAMs use 14-16 transistors per cell.





Automatically Inferring Patterns of Resource Consumption in Network Traffic

Cristian Estan

Stefan Savage

George Varghese

Computer Science and Engineering Department

University of California San Diego

{cestan,savage,varghese}@cs.ucsd.edu

Abstract:

The Internet service model emphasizes flexibility: any node can send any type of traffic at any time. While this design has allowed new applications and usage models to flourish, it also makes the job of network management significantly more challenging. This paper describes a new method of traffic characterization that automatically groups traffic into minimal clusters of conspicuous consumption. Rather than providing a static analysis specialized to capture flows, applications, or network-to-network traffic matrices, our approach dynamically produces hybrid traffic definitions that match the underlying usage. For example, rather than report five hundred small flows, or the amount of TCP traffic to port 80, or the "top ten hosts", our method might reveal that a certain percent of traffic was used by TCP connections between AOL clients and a particular group of Web servers. Similarly, our technique can be used to automatically classify new traffic patterns, such as network worms or peer-to-peer applications, without knowing the structure of such traffic a priori. We describe a series of algorithms for constructing these traffic clusters and minimizing their representation. In addition, we describe the design of our prototype system, AutoFocus, and our experiences using it to discover the dominant and unusual modes of usage on several different production networks.





On Selfish Routing in Internet-Like Environments

Lili Qiu, Microsoft Research (liliq@microsoft.com)

Yang Richard Yang, Yale University (yry@cs.yale.edu)

Yin Zhang, AT&T Labs-Research (yzhang@research.att.com)

Scott Shenker, ICSI (shenker@icir.org)

Abstract:

A recent trend in routing research is to avoid inefficiencies in network-level routing by allowing hosts to either choose routes themselves (e.g., source routing) or use overlay routing networks (e.g., Detour or RON). Such approaches result in selfish routing, because routing decisions are no longer based on system-wide criteria but are instead designed to optimize host-based or overlay-based metrics. A series of theoretical results showing that selfish routing can result in suboptimal system behavior have cast doubts on this approach. In this paper, we use a game-theoretic approach to investigate the performance of selfish routing in Internet-like environments. We focus on intra-domain network environments and use realistic topologies and traffic demands in our simulations. We show that in contrast to theoretical worst cases, selfish routing achieves close to optimal average latency in such environments. However, such performance benefit comes at the expense of significantly increased congestion on certain links. Moreover, the adaptive nature of selfish overlays can significantly reduce the effectiveness of traffic engineering by making network traffic less predictable.





Forwarding in a Content-Based Network

Antonio Carzaniga, Department of Computer Science, University of Colorado, Boulder, Colorado 80309-0430 USA (carzanig@cs.colorado.edu)

Alexander L. Wolf, Department of Computer Science, University of Colorado, Boulder, Colorado 80309-0430 USA (alw@cs.colorado.edu)

Abstract:

This paper presents an algorithm for content-based forwarding, an essential function in content-based networking. Unlike in traditional address-based unicast or multicast networks, where messages are given explicit destination addresses, the movement of messages through a content-based network is driven by predicates applied to the content of the messages. Forwarding in such a network amounts to evaluating the predicates stored in a router's forwarding table in order to decide to which neighbor routers the message should be sent. We are interested in finding a forwarding algorithm that can make this decision as quickly as possible in situations where there are numerous, complex predicates and high volumes of messages. We present such an algorithm and give the results of studies evaluating its performance.
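
A minimal sketch of the forwarding decision being optimized here (the predicate encoding, table, and names are illustrative assumptions, not the paper's data structures): each entry in a router's forwarding table is a predicate, a disjunction of conjunctions of attribute constraints, and a message is sent to every neighbor whose predicate it satisfies.

```python
def matches(message, predicate):
    """predicate: a disjunction (list) of conjunctions (dicts of constraints)."""
    ops = {'=': lambda a, b: a == b,
           '<': lambda a, b: a < b,
           '>': lambda a, b: a > b}
    return any(all(attr in message and ops[op](message[attr], val)
                   for attr, (op, val) in conj.items())
               for conj in predicate)

# hypothetical forwarding table: neighbor -> predicate
table = {
    'neighbor1': [{'class': ('=', 'alert'), 'severity': ('>', 3)}],
    'neighbor2': [{'class': ('=', 'stock'), 'price': ('<', 100)}],
}

msg = {'class': 'alert', 'severity': 5}
out = [n for n, pred in table.items() if matches(msg, pred)]
print(out)  # ['neighbor1']
```

The paper's contribution is doing this evaluation far faster than the linear scan above when predicates are numerous and complex.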





Peer-to-Peer Information Retrieval Using Self-Organizing Semantic Overlay Networks

Chunqiang Tang, Dept. of Computer Science, Univ. of Rochester, Rochester, NY 14627-0226 (sarrmor@cs.rochester.edu)

Zhichen Xu, HP Laboratories, 1501 Page Mill Rd., Palo Alto, CA 94304-1126 (zhichen@hpl.hp.com)

Sandhya Dwarkadas, Dept. of Computer Science, Univ. of Rochester, Rochester, NY 14627-0226 (sandhya@cs.rochester.edu)

Abstract:

Content-based full-text search is a challenging problem in Peer-to-Peer (P2P) systems. Traditional approaches have either been centralized or use flooding to ensure accuracy of the results returned. In this paper, we present pSearch, a decentralized non-flooding P2P information retrieval system. pSearch distributes document indices through the P2P network based on document semantics generated by Latent Semantic Indexing (LSI). The search cost (in terms of different nodes searched and data transmitted) for a given query is thereby reduced, since the indices of semantically related documents are likely to be co-located in the network. We also describe techniques that help distribute the indices more evenly across the nodes, and further reduce the number of nodes accessed using appropriate index distribution as well as using index samples and recently processed queries to guide the search. Experiments show that pSearch can achieve performance comparable to centralized information retrieval systems by searching only a small number of nodes. For a system with 128,000 nodes and 528,543 documents (from news, magazines, etc.), pSearch searches only 19 nodes and transmits only 95.5KB data during the search, whereas the top 15 documents returned by pSearch and LSI have a 91.7% intersection.





Scaling Internet Routers Using Optics

Shang-Tse Chuang

Kyoungsik Yu

David Miller

Mark Horowitz

Olav Solgaard

Nick McKeown

Stanford University

Abstract:

Routers built around a single-stage crossbar and a centralized scheduler do not scale, and (in practice) do not provide the throughput guarantees that network operators need to make efficient use of their expensive long-haul links. In this paper we consider how optics can be used to scale capacity and reduce power in a router. We start with the promising load-balanced switch architecture proposed by C.-S. Chang. This approach eliminates the scheduler, is scalable, and guarantees 100% throughput for a broad class of traffic. But several problems need to be solved to make this architecture practical: (1) Packets can be mis-sequenced, (2) Pathological periodic traffic patterns can make throughput arbitrarily small, (3) The architecture requires a rapidly configuring switch fabric, and (4) It does not work when linecards are missing or have failed. In this paper we solve each problem in turn, and describe new architectures that include our solutions. We motivate our work by designing a 100Tb/s packet-switched router arranged as 640 linecards, each operating at 160Gb/s. We describe two different implementations based on technology available within the next three years.





Longest Prefix Matching Using Bloom Filters

Sarang Dharmapurikar (sarang@arl.wustl.edu)

Praveen Krishnamurthy (praveen@ccrc.wustl.edu)

David E. Taylor (det3@arl.wustl.edu)

Washington University in Saint Louis, Saint Louis, MO, USA

Abstract:

We introduce the first algorithm that we are aware of to employ Bloom filters for Longest Prefix Matching (LPM). The algorithm performs parallel queries on Bloom filters, an efficient data structure for membership queries, in order to determine address prefix membership in sets of prefixes sorted by prefix length. We show that use of this algorithm for Internet Protocol (IP) routing lookups results in a search engine providing better performance and scalability than TCAM-based approaches. The key feature of our technique is that the performance, as determined by the number of dependent memory accesses per lookup, can be held constant for longer address lengths or additional unique address prefix lengths in the forwarding table given that memory resources scale linearly with the number of prefixes in the forwarding table. Our approach is equally attractive for Internet Protocol Version 6 (IPv6) which uses 128-bit destination addresses, four times longer than IPv4. We present a basic version of our approach along with optimizations leveraging previous advances in LPM algorithms. We also report results of performance simulations of our system using snapshots of IPv4 BGP tables and extend the results to IPv6. Using less than 2Mb of embedded RAM and a commodity SRAM device, our technique achieves average performance of one hash probe per lookup and a worst case of two hash probes and one array access per lookup.
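
The control flow of such a lookup can be sketched as follows; this is a simplified illustration rather than the paper's implementation, with Python sets standing in for the per-length Bloom filters and a dict for the off-chip prefix table. Because real Bloom filters may return false positives, candidate lengths must be verified longest-first against the exact table.

```python
# toy forwarding table: bit-string prefix -> next-hop port
prefixes = {'0': 'portC', '10': 'portA', '1011': 'portB'}

by_len = {}                       # one "Bloom filter" per distinct prefix length
for p in prefixes:
    by_len.setdefault(len(p), set()).add(p)

def lookup(addr_bits):
    # query every length's filter "in parallel"; a real Bloom filter
    # could also report false positives at this step
    candidates = sorted((l for l, f in by_len.items()
                         if addr_bits[:l] in f), reverse=True)
    # verify candidates longest-first against the exact table
    for l in candidates:
        port = prefixes.get(addr_bits[:l])
        if port is not None:
            return port
    return None

print(lookup('10111'))  # 'portB': longest matching prefix is '1011'
```

The filters are small enough for on-chip memory, so on average only the single verification probe touches slower off-chip memory, which is the source of the one-hash-probe-per-lookup figure quoted above.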





Packet Classification Using Multidimensional Cutting

Sumeet Singh, UC San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0114 (susingh@cs.ucsd.edu)

Florin Baboescu, UC San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0114 (baboescu@cs.ucsd.edu)

George Varghese, UC San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0114 (varghese@cs.ucsd.edu)

Jia Wang, AT&T Labs-Research, Florham Park, NJ 07932-0971 (jiawang@research.att.com)

Abstract:

This paper introduces a classification algorithm called HyperCuts. Like the previously best known algorithm, HiCuts, HyperCuts is based on a decision tree structure. Unlike HiCuts, however, in which each node in the decision tree represents a hyperplane, each node in the HyperCuts decision tree represents a k-dimensional hypercube. Using this extra degree of freedom and a new set of heuristics to find optimal hypercubes for a given amount of storage, HyperCuts can provide an order of magnitude improvement over existing classification algorithms. HyperCuts uses 2 to 10 times less memory than HiCuts optimized for memory, while the worst case search time of HyperCuts is 50-500% better than that of HiCuts optimized for speed. Compared with another recent scheme, EGT-PC, HyperCuts uses 1.8-7 times less memory space while the worst case search time is up to 5 times smaller. More importantly, unlike EGT-PC, HyperCuts can be fully pipelined to provide one classification result every packet arrival time, and also allows fast updates.





Quantum Cryptography in Practice

Chip Elliott (celliott@bbn.com)

Dr. David Pearson (dpearson@bbn.com)

Dr. Gregory Troxel (gtroxel@bbn.com)

BBN Technologies, 10 Moulton Street, Cambridge, MA 02138

Abstract:

BBN, Harvard, and Boston University are building the DARPA Quantum Network, the world's first network that delivers end-to-end network security via high-speed Quantum Key Distribution, and testing that network against sophisticated eavesdropping attacks. The first network link has been up and steadily operational in our laboratory since December 2002. It provides a Virtual Private Network between private enclaves, with user traffic protected by a weak-coherent implementation of quantum cryptography. This prototype is suitable for deployment in metro-size areas via standard telecom (dark) fiber. In this paper, we introduce quantum cryptography, discuss its relation to modern secure networks, and describe its unusual physical layer, its specialized quantum cryptographic protocol suite (quite interesting in its own right), and our extensions to IPsec to integrate it with quantum cryptography.





Stratified Round Robin: A Low Complexity Packet Scheduler with Bandwidth Fairness and Bounded Delay

Sriram Ramabhadran (sriram@cs.ucsd.edu)

Joseph Pasquale (pasquale@cs.ucsd.edu)

Department of Computer Science & Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0114

Abstract:

Fair queuing is a well-studied problem in modern computer networks. However, there remains a gap between scheduling algorithms that have provably good performance, and those that are feasible and practical to implement in high-speed routers. In this paper, we propose a novel packet scheduler called Stratified Round Robin, which has low complexity, and is amenable to a simple hardware implementation. Stratified Round Robin exhibits good fairness and delay properties that are demonstrated through both analytical results and simulations. In particular, it provides a single packet delay bound that is independent of the number of flows. This property is unique to Stratified Round Robin among all other schedulers of comparable complexity.
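
The stratification idea the scheduler's name refers to can be illustrated briefly; the interval convention below is our own assumption, and the actual class definitions and cross-class schedule are in the paper. Flows are binned by the power-of-two interval containing their weight, so the scheduler only needs round-robin within each class plus a simple schedule across classes, avoiding the per-packet sorting of classical fair queuing.

```python
import math

def flow_class(w):
    """Flows with weight w in [2^-k, 2^-(k-1)) fall into class k (assumed convention)."""
    return math.ceil(-math.log2(w))

# flows of similar weight share a class and are served round-robin within it
print([flow_class(w) for w in (0.5, 0.3, 0.1)])  # [1, 2, 4]
```

Because weights within a class differ by at most a factor of two, plain round-robin inside a class is already approximately fair, which is what keeps the per-packet work low.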





A Comparison of Hard-state and Soft-state Signaling Protocols

Ping Ji

Zihui Ge

Jim Kurose

Don Towsley

Computer Science Department

University of Massachusetts at Amherst

{jiping,gezihui,kurose,towsley}@cs.umass.edu

Abstract:

One of the key infrastructure components in all telecommunication networks, ranging from the telephone network, to VC-oriented data networks, to the Internet, is its signaling system. Two broad approaches towards signaling can be identified: so-called hard-state and soft-state approaches. Despite the fundamental importance of signaling, our understanding of these approaches - their pros and cons and the circumstances in which they might best be employed - is mostly anecdotal (and occasionally religious). In this paper, we compare and contrast a variety of signaling approaches ranging from a "pure" soft state, to soft-state approaches augmented with explicit state removal and/or reliable signaling, to a "pure" hard state approach. We develop an analytic model that allows us to quantify state inconsistency in single- and multiple-hop signaling scenarios, and the "cost" (both in terms of signaling overhead, and application-specific costs resulting from state inconsistency) associated with a given signaling approach and its parameters (e.g., state refresh and removal timers). Among the class of soft-state approaches, we find that a soft-state approach coupled with explicit removal substantially improves the degree of state consistency while introducing little additional signaling message overhead. The addition of reliable explicit setup/update/removal allows the soft-state approach to achieve comparable (and sometimes better) consistency than that of the hard-state approach.
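
The "pure" soft-state end of the spectrum compared above can be sketched in a few lines (a toy of ours, not the paper's analytic model): a single message both installs and refreshes state, and state whose lifetime expires without a refresh is removed silently, with no explicit teardown required.

```python
class SoftStateTable:
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.expiry = {}              # key -> absolute expiry time

    def refresh(self, key, now):
        """Setup and refresh are the same message in pure soft state."""
        self.expiry[key] = now + self.lifetime

    def active(self, now):
        """Drop state whose refresh timer has lapsed; report what survives."""
        self.expiry = {k: t for k, t in self.expiry.items() if t > now}
        return set(self.expiry)

table = SoftStateTable(lifetime=3)
table.refresh('flow-1', now=0)
table.refresh('flow-2', now=0)
table.refresh('flow-1', now=2)        # flow-1 is refreshed; flow-2 is not
print(table.active(now=4))            # {'flow-1'}
```

Hard-state protocols replace the timers with explicit, reliable setup and teardown messages; the paper's hybrids (soft state plus explicit removal and/or reliable signaling) sit between these two sketches.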





The Effects of Active Queue Management on Web Performance

Long Le

Jay Aikat

Kevin Jeffay

F. Donelson Smith

Department of Computer Science

University of North Carolina at Chapel Hill

http://www.cs.unc.edu/Research/dirt

Abstract:

We present an empirical study of the effects of active queue management (AQM) on the distribution of response times experienced by a population of web users. Three prominent AQM schemes are considered: the Proportional Integrator (PI) controller, the Random Exponential Marking (REM) controller, and Adaptive Random Early Detection (ARED). The effects of these AQM schemes were studied alone and in combination with Explicit Congestion Notification (ECN). Our major results are:

  1. For offered loads up to 80% of bottleneck link capacity, no AQM scheme provides better response times than simple drop-tail FIFO queue management.
  2. For loads of 90% of link capacity or greater when ECN is not used, PI results in a modest improvement over drop-tail and the other AQM schemes.
  3. With ECN, both PI and REM provide significant response time improvement at offered loads above 90% of link capacity. Moreover, at a load of 90%, PI and REM with ECN provide response times competitive with those achieved on an unloaded network.
  4. ARED with recommended parameter settings consistently resulted in the poorest response times, and these were not improved by the addition of ECN.

We conclude that without ECN there is little end-user performance gain to be realized by employing the AQM designs studied here. However, with ECN, response times can be significantly improved. In addition, it appears likely that provider links may be operated at near-saturation levels without significant degradation in user-perceived performance.





Persistent Dropping: An Efficient Control of Traffic Aggregates

Hani Jamjoom

Kang G. Shin

University of Michigan

{jamjoom,kgshin}@eecs.umich.edu

Abstract:

Flash crowd events (FCEs) present a real threat to the stability of routers and end-servers. Such events are characterized by a large and sustained spike in client arrival rates, usually to the point of service failure. Traditional rate-based drop policies, such as Random Early Drop (RED), become ineffective in such situations because clients tend to be persistent, in the sense that they make multiple retransmission attempts before aborting their connection. Because it is built into TCP's congestion control, this persistence is very widespread, making it a major stumbling block to providing responsive aggregate traffic controls. This paper focuses on analyzing and building a coherent model of the effects of client persistence on the controllability of aggregate traffic. Based on this model, we propose a new drop strategy, called persistent dropping, that regulates the arrival of SYN packets and achieves three important goals: (1) it allows routers and end-servers to quickly converge to their control targets without sacrificing fairness, (2) it minimizes the portion of client delay that is attributable to the applied controls, and (3) it is both easily implementable and computationally tractable. Using a real implementation of this controller in the Linux kernel, we demonstrate its efficacy: up to 60% delay reduction for drop probabilities less than 0.5.
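The contrast with random dropping can be illustrated in a few lines: if the drop verdict is a deterministic function of the client's identity, the same client is either always admitted (and so never pays retransmission delay) or always dropped (and so fails fast). This hash-based sketch is hypothetical and not the paper's exact mechanism:

```python
import hashlib

def persistent_drop(client_id, drop_prob):
    """Persistent drop decision for a SYN from client_id: hash the
    identifier to a point in [0, 1) and drop iff it falls below
    drop_prob. Every retransmission from the same client receives
    the same verdict, unlike independent random drops."""
    h = hashlib.sha256(client_id.encode()).digest()
    point = int.from_bytes(h[:8], "big") / 2**64
    return point < drop_prob
```

Across a large client population the dropped fraction still matches the target probability, so the aggregate control point is preserved while per-client delay from repeated random losses disappears.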





An Information-Theoretic Approach to Traffic Matrix Estimation

Yin Zhang

Matthew Roughan

Carsten Lund

David Donoho

AT&T Labs-Research

Stanford University

(yzhang,roughan,lund)@research.att.com

donoho@stanford.edu

Abstract:

Traffic matrices are required inputs for many IP network management tasks: for instance, capacity planning, traffic engineering and network reliability analysis. However, it is difficult to measure these matrices directly, and so there has been recent interest in inferring traffic matrices from link measurements and other more easily measured data. Typically, this inference problem is ill-posed, as it involves significantly more unknowns than data. Experience in many scientific and engineering fields has shown that it is essential to approach such ill-posed problems via "regularization". This paper presents a new approach to traffic matrix estimation using a regularization based on "entropy penalization". Our solution chooses the traffic matrix consistent with the measured data that is information-theoretically closest to a model in which source/destination pairs are stochastically independent. We use fast algorithms based on modern convex optimization theory to solve for our traffic matrices. We evaluate the algorithm with real backbone traffic and routing data, and demonstrate that it is fast, accurate, robust, and flexible.
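The independence model at the heart of the regularization is the familiar gravity model: the entropy penalty pulls the estimate toward a matrix whose entries factor into a source term and a destination term. A minimal sketch (the full method then minimizes a least-squares fit to the link measurements plus a scaled information-theoretic distance to this model):

```python
def gravity_model(out_totals, in_totals):
    """Traffic matrix under source/destination independence:
    T[i][j] = (traffic leaving i) * (traffic entering j) / total.
    Assumes sum(out_totals) == sum(in_totals)."""
    total = float(sum(in_totals))
    return [[o * i / total for i in in_totals] for o in out_totals]
```

By construction the row and column sums of the result reproduce the measured per-node totals, which is why this matrix is the natural prior when nothing else is known about the demands.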





Making Intra-Domain Routing Robust to Changing and Uncertain Traffic Demands: Understanding Fundamental Tradeoffs

David Applegate

Edith Cohen

AT&T Labs-Research

AT&T Labs-Research

180 Park Avenue

180 Park Avenue

Florham Park, NJ 07932, USA

Florham Park, NJ 07932, USA

david@research.att.com

edith@research.att.com

Abstract:

Intra-domain traffic engineering can significantly enhance the performance of large IP backbone networks. Two important components of traffic engineering are understanding the traffic demands and configuring the routing protocols. These two components are inter-linked, as it is widely believed that an accurate view of traffic is important for optimizing the configuration of routing protocols and through that, the utilization of the network.
This basic premise, however, never seems to have been quantified: how important is accurate knowledge of traffic demands for obtaining good utilization of the network? Since traffic demand values are dynamic and elusive, is it possible to obtain a routing that is "robust" to variations in demands? Armed with recently enhanced algorithmic tools, we explore these questions on a diverse collection of ISP networks. We arrive at a surprising conclusion: it is possible to obtain a robust routing that guarantees nearly optimal utilization with fairly limited knowledge of the applicable traffic demands.





Estimating Flow Distributions from Sampled Flow Statistics

Nick Duffield

Carsten Lund

Mikkel Thorup

AT&T Labs-Research

AT&T Labs-Research

AT&T Labs-Research

180 Park Avenue

180 Park Avenue

180 Park Avenue

Florham Park, NJ 07932, USA

Florham Park, NJ 07932, USA

Florham Park, NJ 07932, USA

duffield@research.att.com

lund@research.att.com

mthorup@research.att.com

Abstract:

Passive traffic measurement increasingly employs sampling at the packet level. Many high-end routers form flow statistics from a sampled substream of packets. Sampling is necessary in order to control the consumption of resources by the measurement operations. However, knowledge of the statistics of flows in the unsampled stream remains useful, for understanding both characteristics of source traffic, and consumption of resources in the network.
This paper provides methods that use flow statistics formed from a sampled packet stream to infer the absolute frequencies of flow lengths in the unsampled stream. A key part of our work is inferring the numbers and lengths of flows in the original traffic that evaded sampling altogether. We achieve this through statistical inference and by exploiting protocol-level detail reported in flow records. The method has applications to the detection and characterization of network attacks: we show how to estimate, from sampled flow statistics, the number of compromised hosts that are sending attack traffic past the measurement point. We also investigate the impact on our results of different implementations of packet sampling.
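The basic evasion arithmetic is simple: under independent packet sampling at rate p, a flow of n packets leaves no sample at all with probability (1-p)^n. The naive inversion below scales an observed flow count by the detection probability; it is only a sketch of the starting point (the paper's estimator must also cope with the fact that original flow lengths are unknown, using statistical inference and SYN flags):

```python
def miss_probability(flow_len, p):
    """Probability that a flow of flow_len packets entirely evades
    independent packet sampling at rate p."""
    return (1.0 - p) ** flow_len

def estimate_flow_count(observed_flows, flow_len, p):
    """Naive inversion: scale the number of detected flows of a known
    original length by the probability of detecting such a flow."""
    detect = 1.0 - miss_probability(flow_len, p)
    return observed_flows / detect
```

For example, at a 1-in-10 sampling rate a 2-packet flow is detected with probability 0.19, so 19 observed flows of that length suggest roughly 100 in the original stream.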





A High-level Programming Environment for Packet Trace Anonymization and Transformation

Ruoming Pang

Vern Paxson

Department of Computer Science

International Computer Science Institute

Princeton University

rpang@cs.princeton.edu

vern@icir.org

Abstract:

Packet traces of operational Internet traffic are invaluable to network research, but public sharing of such traces is severely limited by the need to first remove all sensitive information. Current trace anonymization technology leaves only the packet headers intact, completely stripping the contents; to our knowledge, there are no publicly available traces of any significant size that contain packet payloads. We describe a new approach to transform and anonymize packet traces. Our tool provides high-level language support for packet transformation, allowing the user to write short policy scripts to express sophisticated trace transformations. The resulting scripts can anonymize both packet headers and payloads, and can perform application-level transformations such as editing HTTP or SMTP headers, replacing the content of Web items with MD5 hashes, or altering filenames or reply codes that match given patterns. We discuss the critical issue of verifying that anonymizations are both correctly applied and correctly specified, and experiences with anonymizing FTP traces from the Lawrence Berkeley National Laboratory for public release.
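A policy in the spirit the abstract describes, replacing a Web object's content with an MD5 hash while scrubbing an identifying header field, can be sketched as below. The helper name and the specific policy are hypothetical illustrations, not the tool's actual API:

```python
import hashlib
import re

def anonymize_http_response(raw):
    """Toy anonymization policy: keep the HTTP header structure,
    scrub the Server field, and replace the body with its MD5 digest
    so identical objects remain correlatable without exposing content."""
    header, _, body = raw.partition("\r\n\r\n")
    header = re.sub(r"(?m)^Server:.*$", "Server: ANONYMIZED", header)
    digest = hashlib.md5(body.encode()).hexdigest()
    return header + "\r\n\r\n" + digest
```

The same pattern extends to editing SMTP headers or rewriting filenames that match given patterns, which is where high-level language support pays off over hand-written trace surgery.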





A Measurement-Based Analysis of Multihoming

Aditya Akella

Bruce Maggs

Anees Shaikh

Ramesh Sitaraman

Srinivasan Seshan

Carnegie Mellon University

IBM T.J. Watson Research Center

Univ. of Massachusetts

{aditya,srini+,bmm}@cs.cmu.edu

aashaikh@watson.ibm.com

ramesh@akamai.com

Abstract:

Multihoming has traditionally been employed by stub networks to enhance the reliability of their network connectivity. With the advent of commercial "intelligent route control" products, stubs now leverage multihoming to improve performance. Although multihoming is widely used for reliability and, increasingly for performance, not much is known about the tangible benefits that multihoming can offer, or how these benefits can be fully exploited. In this paper, we aim to quantify the extent to which multihomed networks can leverage performance and reliability benefits from connections to multiple providers. We use data collected from servers belonging to the Akamai content distribution network to evaluate performance benefits from two distinct perspectives of multihoming: high-volume content-providers which transmit large volumes of data to many distributed clients, and enterprises which primarily receive data from the network. In both cases, we find that multihoming can improve performance significantly and that not choosing the right set of providers could result in a performance penalty as high as 40%. We also find evidence of diminishing returns in performance when more than four providers are considered for multihoming. In addition, using a large collection of measurements, we provide an analysis of the reliability benefits of multihoming. Finally, we provide guidelines on how multihomed networks can choose ISPs, and discuss practical strategies of using multiple upstream connections to achieve optimal performance benefits.
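The diminishing-returns effect has a simple shape: a k-multihomed stub that routes each transfer via its momentarily best provider achieves the average of per-slot minima, and adding providers beyond the few best ones stops helping. The sketch below is purely illustrative (made-up latencies, not the paper's measurement methodology):

```python
def multihomed_latency(samples, k):
    """samples: list of per-time-slot latency lists, one entry per
    candidate provider. A k-multihomed stub using intelligent route
    control picks the best of its first k providers in each slot;
    return the average latency it achieves."""
    return sum(min(slot[:k]) for slot in samples) / len(samples)
```

In the test data, going from one to two providers improves the average, while a third provider adds nothing, a miniature of the diminishing returns the measurements exhibit beyond four providers.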





Towards an Accurate AS-Level Traceroute Tool

Zhuoqing Morley Mao

Jennifer Rexford

Jia Wang

Randy H. Katz

UC Berkeley

AT&T Labs-Research

AT&T Labs-Research

UC Berkeley

zmao@cs.berkeley.edu

jrex@research.att.com

jiawang@research.att.com

randy@cs.berkeley.edu

Abstract:

Traceroute is widely used to detect routing problems, characterize end-to-end paths, and discover the Internet topology. Providing an accurate list of the Autonomous Systems (ASes) along the forwarding path would make traceroute even more valuable to researchers and network operators. However, conventional approaches to mapping traceroute hops to AS numbers are not accurate enough. Address registries are often incomplete and out-of-date. BGP routing tables provide a better IP-to-AS mapping, though this approach has significant limitations as well. Based on our extensive measurements, about 10% of the traceroute paths have one or more hops that do not map to a unique AS number, and around 15% of the traceroute AS paths have an AS loop. In addition, some traceroute AS paths have extra or missing AS hops due to Internet eXchange Points, sibling ASes managed by the same institution, and ASes that do not advertise routes to their infrastructure. Using the BGP tables as a starting point, we propose techniques for improving the IP-to-AS mapping as an important step toward an AS-level traceroute tool. Our algorithms draw on analysis of traceroute probes, reverse DNS lookups, BGP routing tables, and BGP update messages collected from multiple locations. We also discuss how the improved IP-to-AS mapping allows us to home in on cases where the BGP and traceroute AS paths differ for legitimate reasons.
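The BGP-table baseline the paper starts from is a longest-prefix match from each traceroute hop's IP address to the origin AS of the covering route. A minimal sketch (the prefixes and AS numbers below are illustrative; the paper's algorithms then correct this raw mapping using DNS, updates, and probe analysis):

```python
import ipaddress

def build_table(routes):
    """Parse (prefix, origin AS) pairs, e.g. drawn from a BGP table dump."""
    return [(ipaddress.ip_network(prefix), asn) for prefix, asn in routes]

def ip_to_as(table, ip):
    """Map a traceroute hop to an origin AS by longest-prefix match;
    returns None for addresses with no covering route."""
    addr = ipaddress.ip_address(ip)
    best = None
    for net, asn in table:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, asn)
    return best[1] if best else None
```

A linear scan suffices for illustration; a real tool would use a trie so that each of millions of hops resolves in time proportional to the address length.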





The Impact of DHT Routing Geometry on Resilience and Proximity

R. Gummadi

S. Gribble

S. Ratnasamy

S. Shenker

I. Stoica

Abstract:

The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance.





Graph-Theoretic Analysis of Structured Peer-to-Peer Systems: Routing Distances and Fault Resilience

Dmitri Loguinov

Anuj Kumar

Vivek Rai

Sai Ganesh

Department of Computer Science

Texas A&M University

College Station, TX 77843

{dmitri,anujk,vivekr,ssai}@cs.tamu.edu

Abstract:

This paper examines graph-theoretic properties of existing peer-to-peer architectures and proposes a new infrastructure based on optimal-diameter de Bruijn graphs. Since generalized de Bruijn graphs possess very short average routing distances and high resilience to node failure, they are well suited for structured peer-to-peer networks. Using the examples of Chord, CAN, and de Bruijn, we first study routing performance, graph expansion, and clustering properties of each graph. We then examine bisection width, path overlap, and several other properties that affect routing and resilience of peer-to-peer networks. Having confirmed that de Bruijn graphs offer the best diameter and highest connectivity among the existing peer-to-peer structures, we offer a very simple incremental building process that preserves optimal properties of de Bruijn graphs under uniform user joins/departures. We call the combined peer-to-peer architecture ODRI, the Optimal Diameter Routing Infrastructure.
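The short routing distances follow directly from the de Bruijn construction: with identifiers of d symbols over an alphabet of size k, a node's out-neighbors are the shift-and-append variants of its identifier, and any destination is reachable in at most d hops (the graph's diameter). A minimal sketch:

```python
from collections import deque

def debruijn_neighbors(node, k, digits):
    """Out-neighbors in the de Bruijn graph over k symbols with
    digits-symbol identifiers: shift the id left and append any symbol."""
    n = k ** digits
    return [(node * k + s) % n for s in range(k)]

def hops(src, dst, k, digits):
    """BFS hop count from src to dst; it never exceeds `digits`,
    since digits shifts rewrite the whole identifier."""
    dist = {src: 0}
    frontier = deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            return dist[u]
        for v in debruijn_neighbors(u, k, digits):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return None
```

For the binary graph on 8 nodes (k=2, digits=3), every pair is within 3 hops and some pair needs exactly 3, i.e. the diameter is log2 of the network size.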





Making Gnutella-like P2P Systems Scalable

Yatin Chawathe

Sylvia Ratnasamy

Lee Breslau

Nick Lanham

Scott Shenker

AT&T Labs-Research

Intel Research

AT&T Labs-Research

UC Berkeley

ICSI

yatin@research.att.com

sylvia@intel-research.net

breslau@research.att.com

nickl@cs.berkeley.edu

shenker@icsi.berkeley.edu

Abstract:

Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to the wide-area file search problem. Contrary to that trend, we advocate retaining Gnutella's simplicity while proposing new mechanisms that greatly improve its scalability. Building upon prior research [1, 12, 22], we propose several modifications to Gnutella's design that dynamically adapt the overlay topology and the search algorithms in order to accommodate the natural heterogeneity present in most peer-to-peer systems. We test our design through simulations and the results show three to five orders of magnitude improvement in total system capacity. We also report on a prototype implementation and its deployment on a testbed.







Design of a Robust Active Queue Management Algorithm Based on Feedback Compensation

Zhang Heying

Liu Baohong

Dou Wenhua

School of Computer Science

Institute of Automation

School of Computer Science

National University of Defense Technology

National University of Defense Technology

National University of Defense Technology

Changsha 410073, China

Changsha 410073, China

Changsha 410073, China

heyz@sina.com

liu_baohong@hotmail.com

huawd@public.cs.hn.cn

Abstract:

Active Queue Management (AQM) is a very active research area in networking. The main objective of an AQM mechanism is to provide low delay and low loss service in best-effort service networks. In this paper we propose a new AQM algorithm based on the feedback compensation technique in control theory. The algorithm is called Proportional Integral based series compensation, and Position feedback compensation (PIP). By choosing the appropriate feedback compensation element and its parameters, the properties of the corrected system become dependent, to a great extent, on the series and feedback compensatory elements. Thus, PIP can eliminate the error incurred by the inaccuracy in the linear system model, as well as the sensitivity to changes in system parameters such as load level, propagation delay, etc. The performance of PIP is compared to PI using ns simulations. From the experiments and analysis we conclude that PIP is more responsive and more robust than PI under time-varying network conditions.
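The PI baseline that PIP builds on is a two-term update of the marking probability driven by the queue's deviation from a reference length. The sketch below shows only that PI core with made-up gain values; PIP's contribution, the series and position-feedback compensation elements wrapped around this loop, is not modeled here.

```python
def pi_mark_prob(queue, prev_queue, q_ref, p, a, b):
    """One step of the classic PI AQM update:
    p <- p + a*(q - q_ref) - b*(q_prev - q_ref), clipped to [0, 1].
    `a` and `b` are controller gains tuned to the link and load."""
    p = p + a * (queue - q_ref) - b * (prev_queue - q_ref)
    return min(1.0, max(0.0, p))
```

When the queue sits above the reference the marking probability ramps up, and when it sits below the probability decays, steering the queue length toward q_ref regardless of the number of flows.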


