V. Arun

Professor
Computer Science
UMass Amherst


CICS #236,
140 Governors Dr,
Amherst, MA 01003
arun@cs.umass.edu
(413) 545-3651
  • msocket: System support for mobile, multipath, and middlebox-agnostic applications
    Aditya Yadav, Arun Venkataramani,
    IEEE 24th International Conference on Network Protocols (ICNP), 2016,
    Extended technical report.
    Abstract:
    Despite the explosive growth of mobile devices and applications in recent years, today's Internet provides little intrinsic support for seamless mobility. Prior solutions to addressing this problem either handle only a subset of endpoint mobility scenarios or require nontrivial changes to legacy infrastructure. In this paper, we present the design and implementation of msocket, a system that allows communicating endpoints to move across network locations arbitrarily while maintaining disruption-tolerant connectivity without any ...
    BibTeX:
    @inproceedings{msocket,
    author = {Yadav, Aditya and Venkataramani, Arun},
    booktitle = {IEEE 24th International Conference on Network Protocols
    (ICNP)},
    link = {http://www.cs.umass.edu/~arun/papers/msocket.pdf},
    organization = {IEEE},
    pages = {1--10},
    title = {msocket: System support for mobile, multipath, and middlebox-agnostic applications},
    year = {2016}
    }


  • Identifying and Addressing Reachability and Policy Attacks in "Secure" BGP
    Yang Song, Arun Venkataramani, Lixin Gao,
    IEEE/ACM Trans. Netw., Volume: 24, Pages: 2969--2982, 2016
    Abstract:
    BGP is known to have many security vulnerabilities due to the very nature of its underlying assumptions of trust among independently operated networks. Most prior efforts have focused on attacks that can be addressed using traditional cryptographic techniques to ensure authentication or integrity, e.g., BGPSec and related works. Although augmenting BGP with authentication and integrity mechanisms is critical, they are, by design, far from sufficient to prevent attacks based on manipulating the complex BGP protocol itself. In this paper, we identify two serious attacks on two of the most fundamental goals of BGP -- to ensure reachability and to enable ASes to pick routes available to them according to their routing policies -- even in the presence of BGPSec-like mechanisms. Our key contributions are to (1) formalize a series of critical security properties, (2) experimentally validate using commodity router implementations that BGP fails to achieve those properties, (3) quantify the extent of these vulnerabilities in the Internet's AS topology, and (4) propose simple modifications to provably ensure that those properties are satisfied. Our experiments show that, using our attacks, a single malicious AS can cause thousands of other ASes to become disconnected from thousands of other ASes for arbitrarily long, while our suggested modifications almost completely eliminate such attacks.
    BibTeX:
    @article{DBLP:journals/ton/SongVG16,
    author = {Yang Song and Arun Venkataramani and Lixin Gao},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/journals/ton/SongVG16},
    doi = {10.1109/TNET.2015.2503642},
    journal = {{IEEE/ACM} Trans. Netw.},
    link = {http://dx.doi.org/10.1109/TNET.2015.2503642},
    number = {5},
    pages = {2969--2982},
    timestamp = {Tue, 25 Oct 2016 15:54:01 +0200},
    title = {Identifying and Addressing Reachability and Policy Attacks in "Secure" BGP},
    volume = {24},
    year = {2016}
    }


  • Measurement and modeling of user transitioning among networks
    Sookhyun Yang, Jim Kurose, Simon Heimlicher, Arun Venkataramani,
    IEEE INFOCOM, 2015,
    Extended technical report.
    Abstract:
    Physical human mobility has played an important role in the design and operation of mobile networks. Physical mobility, however, differs from user identity (name) mobility in both traditional mobility management protocols such as Mobile-IP and in new architectures, such as XIA and MobilityFirst, that support identity mobility and location independence as first class objects. A multi-homed stationary user or a stationary user shifting among multiple devices attached to different networks will persistently keep his/her identity but will change ...
    BibTeX:
    @inproceedings{yang2015measurement,
    author = {Yang, Sookhyun and Kurose, Jim and Heimlicher, Simon and Venkataramani, Arun},
    booktitle = {IEEE INFOCOM},
    organization = {IEEE},
    pages = {828--836},
    title = {Measurement and modeling of user transitioning among networks},
    year = {2015}
    }


  • Towards a quantitative comparison of location-independent network architectures
    Zhaoyu Gao, Arun Venkataramani, James F. Kurose, Simon Heimlicher,
    ACM SIGCOMM, 2014
    Abstract:
    This paper presents a quantitative methodology and results comparing different approaches for location-independent communication. Our approach is empirical and is based on real Internet topologies, routing tables from real routers, and a measured workload of the mobility of devices and content across network addresses today. We measure the extent of network mobility exhibited by mobile devices with a home-brewed Android app deployed on hundreds of smartphones, and measure the network mobility of Internet ...
    BibTeX:
    @inproceedings{gao2014towards,
    author = {Gao, Zhaoyu and Venkataramani, Arun and Kurose, James F and Heimlicher, Simon},
    booktitle = {ACM SIGCOMM},
    link = {https://www.cics.umass.edu/~arun/papers/Loc_Ind_Comparison.pdf},
    number = {4},
    pages = {259--270},
    publisher = {ACM},
    title = {Towards a quantitative comparison of location-independent network architectures},
    volume = {44},
    year = {2014}
    }


  • A global name service for a highly mobile internetwork
    Abhigyan Sharma, Xiaozheng Tie, Hardeep Uppal, Arun Venkataramani, David Westbrook, Aditya Yadav,
    ACM SIGCOMM, 2014,
    Extended technical report.
    Abstract:
    Mobile devices dominate the Internet today; however, the Internet, rooted in its tethered origins, continues to provide poor infrastructure support for mobility. Our position is that in order to address this problem, a key challenge that must be addressed is the design of a massively scalable global name service that rapidly resolves identities to network locations under high mobility. Our primary contribution is the design, implementation, and evaluation of auspice, a next-generation global name service that addresses this ...
    BibTeX:
    @inproceedings{auspice,
    author = {Sharma, Abhigyan and Tie, Xiaozheng and Uppal, Hardeep and Venkataramani, Arun and Westbrook, David and Yadav, Aditya},
    booktitle = {ACM SIGCOMM},
    link = {http://www.cs.umass.edu/~arun/papers/auspice.pdf},
    number = {4},
    organization = {ACM},
    pages = {247--258},
    title = {A global name service for a highly mobile internetwork},
    volume = {44},
    year = {2014}
    }
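
    The abstract's core idea, rapidly resolving a stable identity to its current network locations under mobility, can be sketched as a toy key-value binding service. This is only an illustration of the concept; the class and method names below are invented and are not Auspice's actual API.

    ```python
    class ToyNameService:
        """Binds a stable, location-independent identifier (e.g. a device GUID)
        to its current set of network addresses, which change under mobility."""

        def __init__(self):
            self._bindings = {}  # identifier -> set of current addresses

        def update(self, identifier, addresses):
            # A mobile endpoint re-registers whenever its attachment point changes.
            self._bindings[identifier] = set(addresses)

        def resolve(self, identifier):
            # Correspondents look up the current addresses before (re)connecting.
            return self._bindings.get(identifier, set())

    gns = ToyNameService()
    gns.update("device:phone", ["10.0.0.5"])
    gns.update("device:phone", ["192.168.1.7"])  # endpoint moved networks
    assert gns.resolve("device:phone") == {"192.168.1.7"}
    ```

    The real system's challenge, as the abstract notes, is doing this at Internet scale with fast, consistent updates; the sketch captures only the name-to-location indirection.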


  • MobilityFirst: a mobility-centric and trustworthy internet architecture
    Arun Venkataramani, James F. Kurose, Dipankar Raychaudhuri, Kiran Nagaraja, Morley Mao, Suman Banerjee,
    Computer Communication Review, Volume: 44, Pages: 74--80, 2014
    Abstract:
    MobilityFirst is a future Internet architecture with mobility and trustworthiness as central design goals. Mobility means that all endpoints -- devices, services, content, and networks -- should be able to frequently change network attachment points in a seamless manner. Trustworthiness means that the network must be resilient to the presence of a small number of malicious endpoints or network routers. MobilityFirst enhances mobility by cleanly separating names or identifiers from addresses or network locations, and enhances security by representing both in an intrinsically verifiable manner, relying upon a massively scalable, distributed, global name service to bind names and addresses, and to facilitate services including device-to-service, multicast, anycast, and context-aware communication, content retrieval, and more. A key insight emerging from our experience is that a logically centralized global name service can significantly enhance mobility and security and transform network-layer functionality. Recognizing and validating this insight is the key contribution of the MobilityFirst architectural effort.
    BibTeX:
    @article{DBLP:journals/ccr/VenkataramaniKRNMB14,
    author = {Arun Venkataramani and James F. Kurose and Dipankar Raychaudhuri and Kiran Nagaraja and Morley Mao and Suman Banerjee},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/journals/ccr/VenkataramaniKRNMB14},
    doi = {10.1145/2656877.2656888},
    journal = {Computer Communication Review},
    link = {http://doi.acm.org/10.1145/2656877.2656888},
    number = {3},
    pages = {74--80},
    timestamp = {Thu, 04 Sep 2014 13:19:51 +0200},
    title = {MobilityFirst: a mobility-centric and trustworthy internet architecture},
    volume = {44},
    year = {2014}
    }


  • Pros & cons of model-based bandwidth control for client-assisted content delivery
    Abhigyan Sharma, Arun Venkataramani, Antonio A. Rocha,
    Sixth International Conference on Communication Systems and Networks, COMSNETS 2014, Bangalore, India, January 6-10, 2014
    Abstract:
    A key challenge in client-assisted content delivery is determining how to allocate limited server bandwidth across a large number of files being concurrently served so as to optimize global performance and cost objectives. In this paper, we present a comprehensive experimental evaluation of strategies to control server bandwidth allocation. As part of this effort, we introduce a new model-based control approach that relies on an accurate yet concise “cheat sheet” based on a priori offline measurement to predict swarm performance as a function of the server bandwidth and other swarm parameters. Our evaluation using a prototype system, SwarmServer, instantiating static, dynamic, and model-based controllers shows that static and dynamic controllers can both be suboptimal due to different reasons. In comparison, a model-based approach consistently outperforms both static and dynamic approaches provided it has access to detailed measurements in the regime of interest. Nevertheless, the broad applicability of a model-based approach may be limited in practice because of the overhead of developing and maintaining a comprehensive measurement-based model of swarm performance in each regime of interest.
    BibTeX:
    @inproceedings{DBLP:conf/comsnets/SharmaVR14,
    author = {Abhigyan Sharma and Arun Venkataramani and Antonio A. Rocha},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/comsnets/SharmaVR14},
    booktitle = {Sixth International Conference on Communication Systems and Networks, {COMSNETS} 2014, Bangalore, India, January 6-10, 2014},
    crossref = {DBLP:conf/comsnets/2014},
    doi = {10.1109/COMSNETS.2014.6734893},
    link = {http://dx.doi.org/10.1109/COMSNETS.2014.6734893},
    pages = {1--8},
    timestamp = {Wed, 12 Feb 2014 15:57:50 +0100},
    title = {Pros \& cons of model-based bandwidth control for client-assisted content delivery},
    year = {2014}
    }


  • VMShadow: optimizing the performance of latency-sensitive virtual desktops in distributed clouds
    Tian Guo, Vijay Gopalakrishnan, K. K. Ramakrishnan, Prashant J. Shenoy, Arun Venkataramani, Seungjoon Lee,
    Multimedia Systems Conference 2014, MMSys '14, Singapore, March 19-21, 2014
    Abstract:
    Distributed clouds offer a choice of data center locations to application providers to host their applications. In this paper, we consider distributed clouds that host virtual desktops (VDs) which are then accessed by their users through remote desktop protocols. VDs have different sensitivities to latency, primarily determined by the types of applications running (games or video players are more sensitive to latency) and the end users' locations. We design VMShadow, a system to automatically optimize the location and performance of latency-sensitive VDs in the cloud. VMShadow performs black-box fingerprinting of a VM's network traffic to infer its latency-sensitivity and employs a greedy heuristic-based algorithm to move highly latency-sensitive VMs to cloud sites that are closer to their end users. VMShadow employs WAN-based live migration and a new network connection migration protocol to ensure that the VM migration and subsequent changes to the VM's network address are transparent to end-users. We implement a prototype of VMShadow in a nested hypervisor and demonstrate its effectiveness for optimizing the performance of VM-based desktops in the cloud. Our experiments on a private cloud and the public EC2 cloud show that VMShadow is able to discriminate between latency-sensitive and insensitive desktop applications and judiciously move only those VMs that will benefit the most. For desktop VMs with video activity, VMShadow improves VNC's refresh rate by 90%. Further, our connection migration proxy, which utilizes dynamic rewriting of packet headers, imposes a rewriting overhead of only 13μs per packet. Trans-continental VM migrations take about 4 minutes.
    BibTeX:
    @inproceedings{DBLP:conf/mmsys/GuoGRSVL14,
    author = {Tian Guo and Vijay Gopalakrishnan and K. K. Ramakrishnan and Prashant J. Shenoy and Arun Venkataramani and Seungjoon Lee},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/mmsys/GuoGRSVL14},
    booktitle = {Multimedia Systems Conference 2014, MMSys '14, Singapore, March 19-21, 2014},
    crossref = {DBLP:conf/mmsys/2014},
    doi = {10.1145/2557642.2557646},
    link = {http://doi.acm.org/10.1145/2557642.2557646},
    pages = {103--114},
    timestamp = {Thu, 27 Mar 2014 10:04:38 +0100},
    title = {VMShadow: optimizing the performance of latency-sensitive virtual desktops in distributed clouds},
    year = {2014}
    }
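
    The greedy placement pass the abstract describes, ranking VMs by latency sensitivity and moving the most sensitive ones to the cloud site closest to their users, can be sketched roughly as follows. This is a hypothetical illustration: the scoring, capacity model, and all names are invented, not VMShadow's implementation.

    ```python
    def greedy_moves(vms, site_latency, capacity):
        """vms: list of (vm_id, sensitivity, current_site, user_region) tuples.
        site_latency: dict mapping (site, user_region) -> latency in ms.
        capacity: dict mapping site -> free VM slots (mutated in place).
        Returns a list of (vm_id, new_site) migrations."""
        moves = []
        # Consider the most latency-sensitive VMs first.
        for vm_id, sensitivity, cur, region in sorted(vms, key=lambda v: -v[1]):
            # Among sites with a free slot, pick the closest to this VM's users.
            open_sites = [s for s in capacity if capacity[s] > 0]
            best = min(open_sites, key=lambda s: site_latency[(s, region)])
            if best != cur and site_latency[(cur, region)] > site_latency[(best, region)]:
                moves.append((vm_id, best))
                capacity[best] -= 1
                capacity[cur] = capacity.get(cur, 0) + 1
        return moves
    ```

    For example, with a highly sensitive desktop VM far from its users and an insensitive one, only the sensitive VM is migrated, mirroring the paper's "move only those that benefit most" behavior.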


  • CDN Pricing and Investment Strategies under Competition
    Yang Song, Lixin Gao, Arun Venkataramani,
    Book chapter in Smart Data Pricing, Pages: 289--320.
    Abstract:
    A content delivery network (CDN) is a large distributed system that caches content at multiple locations on the Internet. This chapter focuses on CDN pricing and introduces a game-theoretic model to analyze the price competition among CDNs. It describes sufficient conditions for price wars in a two-CDN pricing game. The chapter also focuses on CDNs' investment strategies and formulates a dynamic CDN game where the market structure, for example, the number of CDNs and their corresponding coverage, can be changed. It first analyzes market changes caused by CDN federation and provides the conditions for small CDNs to benefit from a CDN federation. The chapter also discusses content producers' CDN selection problem.

  • Distributing content simplifies ISP traffic engineering
    Abhigyan Sharma, Arun Venkataramani, Ramesh K. Sitaraman,
    ACM SIGMETRICS Performance Evaluation Review, 2013,
    Extended technical report with full proofs and additional experiments.
    Abstract:
    Several major Internet service providers today also offer content distribution services. The emergence of such "network-CDNs" (NCDNs) is driven both by market forces and by the cost of carrying ever-increasing volumes of traffic across their backbones. An NCDN has the flexibility to determine both where content is placed and how traffic is routed within the network. However, NCDNs today continue to treat traffic engineering independently from content placement and request redirection decisions. In this paper, we ...
    BibTeX:
    @inproceedings{sharma2013distributing,
    author = {Sharma, Abhigyan and Venkataramani, Arun and Sitaraman, Ramesh K},
    booktitle = {ACM SIGMETRICS Performance Evaluation Review},
    link = {http://www.cs.umass.edu/~arun/papers/NCDN.pdf},
    number = {1},
    organization = {ACM},
    pages = {229--242},
    title = {Distributing content simplifies ISP traffic engineering},
    volume = {41},
    year = {2013}
    }


  • Content Availability and Bundling in Swarming Systems
    Daniel S. Menasché, Antonio Augusto de Aragão Rocha, Bin Li, Donald F. Towsley, Arun Venkataramani,
    IEEE/ACM Trans. Netw., Volume: 21, Pages: 580--593, 2013
    Abstract:
    BitTorrent, the immensely popular file swarming system, suffers a fundamental problem: content unavailability. Although swarming scales well to tolerate flash crowds for popular content, it is less useful for unpopular content as peers arriving after the initial rush find it unavailable. In this paper, we present a model to quantify content availability in swarming systems. We use the model to analyze the availability and the performance implications of bundling, a strategy commonly adopted by many BitTorrent publishers today. We find that even a limited amount of bundling exponentially reduces content unavailability. For swarms with highly unavailable publishers, the availability gain of bundling can result in a net decrease in average download time. We empirically confirm the model's conclusions through experiments on PlanetLab using the Mainline BitTorrent client.
    BibTeX:
    @article{DBLP:journals/ton/MenascheRLTV13,
    author = {Daniel S. Menasch\'e and Antonio Augusto de Arag\~ao Rocha and Bin Li and Donald F. Towsley and Arun Venkataramani},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/journals/ton/MenascheRLTV13},
    doi = {10.1109/TNET.2012.2212205},
    journal = {{IEEE/ACM} Trans. Netw.},
    link = {http://dx.doi.org/10.1109/TNET.2012.2212205},
    number = {2},
    pages = {580--593},
    timestamp = {Tue, 25 Jun 2013 19:55:48 +0200},
    title = {Content Availability and Bundling in Swarming Systems},
    volume = {21},
    year = {2013}
    }


  • VMShadow: optimizing the performance of virtual desktops in distributed clouds
    Tian Guo, Vijay Gopalakrishnan, K. K. Ramakrishnan, Prashant J. Shenoy, Arun Venkataramani, Seungjoon Lee,
    ACM Symposium on Cloud Computing, SOCC '13, Santa Clara, CA, USA, October 1-3, 2013
    Abstract:
    We present VMShadow, a system that automatically optimizes the location and performance of applications based on their dynamic workloads. We prototype VMShadow and demonstrate its efficacy using VM-based desktops in the cloud as an example application. Our experiments on a private cloud as well as the EC2 cloud, using a nested hypervisor, show that VMShadow is able to discriminate between location-sensitive and location-insensitive desktop VMs and judiciously moves only those that will benefit the most from the migration. For example, VMShadow performs transcontinental VM migrations in about 4 minutes and can improve VNC's video refresh rate by up to 90%.
    BibTeX:
    @inproceedings{DBLP:conf/cloud/GuoGRSVL13,
    author = {Tian Guo and Vijay Gopalakrishnan and K. K. Ramakrishnan and Prashant J. Shenoy and Arun Venkataramani and Seungjoon Lee},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/cloud/GuoGRSVL13},
    booktitle = {{ACM} Symposium on Cloud Computing, {SOCC} '13, Santa Clara, CA, USA, October 1-3, 2013},
    crossref = {DBLP:conf/cloud/2013},
    doi = {10.1145/2523616.2525950},
    link = {http://doi.acm.org/10.1145/2523616.2525950},
    pages = {42:1--42:2},
    timestamp = {Wed, 29 Mar 2017 16:45:26 +0200},
    title = {VMShadow: optimizing the performance of virtual desktops in distributed clouds},
    year = {2013}
    }


  • Design requirements of a global name service for a mobility-centric, trustworthy internetwork
    Arun Venkataramani, Abhigyan Sharma, Xiaozheng Tie, Hardeep Uppal, David Westbrook, Jim Kurose, Dipankar Raychaudhuri,
    Fifth International Conference on Communication Systems and Networks, COMSNETS 2013, Bangalore, India, January 7-10, 2013
    Abstract:
    The Internet's tremendous success as well as our maturing realization of its architectural shortcomings have attracted significant research attention towards clean-slate re-designs in recent times. A number of these shortcomings can be traced back to naming. The current Internet uses IP addresses to conflate identity and network location, which results in poor support for mobility and multihoming, as well as vulnerability to hijacking and spoofing of addresses. The Internet's name resolution infrastructure deeply embeds in its design the assumption of mostly stationary hosts and poorly satisfies the performance, security, and functionality demanded by modern mobile services. As a step towards addressing these shortcomings, we present the design of a global name service that forms a central component of MobilityFirst, a clean-slate Internet architecture with mobility and trustworthiness as principal design goals. MobilityFirst relies on the global name service to cleanly separate identity from network location and to resolve identifiers to locations in a secure manner. More importantly, MobilityFirst capitalizes on the role of the name resolution infrastructure as a logically central, first point of contact to significantly enhance a number of network-layer functions such as supporting host and network mobility, multi-homed traffic engineering, content retrieval, multicast, and next-generation context-aware services. This paper identifies key challenges that must be addressed to realize such a vision and outlines the design of a distributed global name service that can resolve identifiers to dynamic attributes in a fast, consistent, and cost-effective manner at Internet scales.
    BibTeX:
    @inproceedings{DBLP:conf/comsnets/VenkataramaniSTUWKR13,
    author = {Arun Venkataramani and Abhigyan Sharma and Xiaozheng Tie and Hardeep Uppal and David Westbrook and Jim Kurose and Dipankar Raychaudhuri},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/comsnets/VenkataramaniSTUWKR13},
    booktitle = {Fifth International Conference on Communication Systems and Networks, {COMSNETS} 2013, Bangalore, India, January 7-10, 2013},
    crossref = {DBLP:conf/comsnets/2013},
    doi = {10.1109/COMSNETS.2013.6465579},
    link = {http://dx.doi.org/10.1109/COMSNETS.2013.6465579},
    pages = {1--9},
    timestamp = {Mon, 25 Feb 2013 15:54:43 +0100},
    title = {Design requirements of a global name service for a mobility-centric, trustworthy internetwork},
    year = {2013}
    }


  • Identifying and Addressing Protocol Manipulation Attacks in "Secure" BGP
    Yang Song, Arun Venkataramani, Lixin Gao,
    IEEE 33rd International Conference on Distributed Computing Systems, ICDCS 2013, 8-11 July, 2013, Philadelphia, Pennsylvania, USA
    Abstract:
    Over more than a decade, researchers have studied a number of control and data plane attacks on BGP, the Internet's interdomain routing protocol, in the presence of malicious ASes. These prior efforts have largely focused on attacks that can be addressed using traditional cryptographic mechanisms to ensure authentication or integrity (e.g., S-BGP). Although augmenting BGP with authentication and integrity mechanisms is critical, it is far from sufficient to prevent attacks based on manipulating the complex BGP protocol itself. In this paper, we identify two serious protocol manipulation attacks that undermine the two most fundamental goals of the BGP control plane -- to ensure reachability and enable ASes to pick routes according to their routing policies -- despite the presence of S-BGP-like mechanisms. Our key contributions are to (1) formalize two critical security properties, (2) experimentally validate using commodity router implementations that BGP fails to achieve them, (3) quantify the extent of the resulting vulnerabilities in the Internet's AS topology, and (4) design and implement simple modifications to provably ensure that those properties are satisfied. Our experiments show that a single malicious AS can cause thousands of other ASes to become disconnected from thousands of other ASes for arbitrarily long, while our proposed modifications almost completely eliminate such attacks.
    BibTeX:
    @inproceedings{DBLP:conf/icdcs/SongVG13,
    author = {Yang Song and Arun Venkataramani and Lixin Gao},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/icdcs/SongVG13},
    booktitle = {{IEEE} 33rd International Conference on Distributed Computing Systems, {ICDCS} 2013, 8-11 July, 2013, Philadelphia, Pennsylvania, {USA}},
    crossref = {DBLP:conf/icdcs/2013},
    doi = {10.1109/ICDCS.2013.32},
    link = {http://dx.doi.org/10.1109/ICDCS.2013.32},
    pages = {550--559},
    timestamp = {Thu, 11 Dec 2014 17:34:06 +0100},
    title = {Identifying and Addressing Protocol Manipulation Attacks in "Secure" BGP},
    year = {2013}
    }


  • On the CDN pricing game
    Yang Song, Arun Venkataramani, Lixin Gao,
    2013 Proceedings IEEE INFOCOM Workshops, Turin, Italy, April 14-19, 2013
    Abstract:
    Content Delivery Networks (CDNs) serve a large fraction of Internet traffic today, improving user-perceived response time and availability of content. With tens of CDNs competing for content producers, it is important to understand the game played by these CDNs and whether the game is sustainable in the long term. In this paper, we formulate a game-theoretic model to analyze price competition among CDNs. Under this model, we propose an optimal strategy for two-CDN games. The strategy is incentive-compatible since any CDN that deviates from the strategy ends up with a lower utility. The strategy is also efficient since it produces a total utility that is at least two thirds of the socially optimal utility. We formally derive the sufficient conditions for such a strategy to exist, and empirically show that there exists an optimal strategy for games with more than two CDNs.
    BibTeX:
    @inproceedings{DBLP:conf/infocom/SongVG13a,
    author = {Yang Song and Arun Venkataramani and Lixin Gao},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/infocom/SongVG13a},
    booktitle = {2013 Proceedings {IEEE} {INFOCOM} Workshops, Turin, Italy, April 14-19, 2013},
    crossref = {DBLP:conf/infocom/2013w},
    doi = {10.1109/INFCOMW.2013.6562871},
    link = {http://dx.doi.org/10.1109/INFCOMW.2013.6562871},
    pages = {339--344},
    timestamp = {Mon, 26 Jan 2015 16:54:52 +0100},
    title = {On the CDN pricing game},
    year = {2013}
    }


  • MobilityFirst: a robust and trustworthy mobility-centric architecture for the future internet
    Dipankar Raychaudhuri, Kiran Nagaraja, Arun Venkataramani,
    Mobile Computing and Communications Review, Volume: 16, Pages: 2--13, 2012
    Abstract:
    This paper presents an overview of the MobilityFirst network architecture, currently under development as part of the US National Science Foundation's Future Internet Architecture (FIA) program. The proposed architecture is intended to directly address the challenges of wireless access and mobility at scale, while also providing new services needed for emerging mobile Internet application scenarios. After briefly outlining the original design goals of the project, we provide a discussion of the main architectural concepts behind the network design, identifying key features such as separation of names from addresses, public-key based globally unique identifiers (GUIDs) for named objects, a global name resolution service (GNRS) for dynamic binding of names to addresses, storage-aware routing and late binding, content- and context-aware services, an optional in-network compute layer, and so on. This is followed by a brief description of the MobilityFirst protocol stack as a whole, along with an explanation of how the protocol works at end-user devices and inside network routers. Examples of specific advanced services supported by the protocol stack, including multi-homing, mobility with disconnection, and content retrieval/caching, are given for illustration. Further design details of two key protocol components, the GNRS name resolution service and the GSTAR routing protocol, are also described along with sample results from evaluation. In conclusion, a brief description of an ongoing multi-site experimental proof-of-concept deployment of the MobilityFirst protocol stack on the GENI testbed is provided.
    BibTeX:
    @article{DBLP:journals/sigmobile/RaychaudhuriNV12,
    author = {Dipankar Raychaudhuri and Kiran Nagaraja and Arun Venkataramani},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/journals/sigmobile/RaychaudhuriNV12},
    doi = {10.1145/2412096.2412098},
    journal = {Mobile Computing and Communications Review},
    link = {http://doi.acm.org/10.1145/2412096.2412098},
    number = {3},
    pages = {2--13},
    timestamp = {Mon, 21 Jan 2013 13:38:12 +0100},
    title = {MobilityFirst: a robust and trustworthy mobility-centric architecture for the future internet},
    volume = {16},
    year = {2012}
    }


  • Pros & Cons of Model-based Bandwidth Control for Client-assisted Content Delivery
    Abhigyan Sharma, Arun Venkataramani, Antonio Augusto de Aragão Rocha,
    CoRR, Volume: abs/1209.5651, 2012
    Abstract:
    A key challenge in client-assisted content delivery is determining how to allocate limited server bandwidth across a large number of files being concurrently served so as to optimize global performance and cost objectives. In this paper, we present a comprehensive experimental evaluation of strategies to control server bandwidth allocation. As part of this effort, we introduce a new model-based control approach that relies on an accurate yet concise "cheat sheet" based on a priori offline measurement to predict swarm performance as a function of the server bandwidth and other swarm parameters. Our evaluation using a prototype system, SwarmServer, instantiating static, dynamic, and model-based controllers shows that static and dynamic controllers can both be suboptimal due to different reasons. In comparison, a model-based approach consistently outperforms both static and dynamic approaches provided it has access to detailed measurements in the regime of interest. Nevertheless, the broad applicability of a model-based approach may be limited in practice because of the overhead of developing and maintaining a comprehensive measurement-based model of swarm performance in each regime of interest.
    BibTeX:
    @article{DBLP:journals/corr/abs-1209-5651,
    author = {Abhigyan Sharma and Arun Venkataramani and Antonio Augusto de Arag\~ao Rocha},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/journals/corr/abs-1209-5651},
    journal = {CoRR},
    link = {http://arxiv.org/abs/1209.5651},
    timestamp = {Wed, 10 Oct 2012 21:28:55 +0200},
    title = {Pros \& Cons of Model-based Bandwidth Control for Client-assisted Content Delivery},
    volume = {abs/1209.5651},
    year = {2012}
    }


  • Distributing Content Simplifies ISP Traffic Engineering
    Abhigyan Sharma, Arun Venkataramani, Ramesh K. Sitaraman,
    CoRR, Volume: abs/1209.5715, 2012
    Abstract:
    Several major Internet service providers (e.g., Level-3, AT&T, Verizon) today also offer content distribution services. The emergence of such "Network-CDNs" (NCDNs) is driven by market forces that place more value on content services than just carrying the bits. NCDNs are also necessitated by the need to reduce the cost of carrying ever-increasing volumes of traffic across their backbones. An NCDN has the flexibility to determine both where content is placed and how traffic is routed within the network. However, NCDNs today continue to treat traffic engineering independently from content placement and request redirection decisions. In this paper, we investigate the interplay between content distribution strategies and traffic engineering and ask how an NCDN should engineer traffic in a content-aware manner. Our experimental analysis, based on traces from a large content distribution network and real ISP topologies, shows that effective content placement can significantly simplify traffic engineering and in most cases obviate the need to engineer NCDN traffic altogether! Further, we show that simple demand-oblivious schemes for routing and placement such as InverseCap and LRU suffice as they achieve network costs that are close to the best possible.
    [+]Show Bib
    @article{DBLP:journals/corr/abs-1209-5715,
    author = {Abhigyan Sharma and Arun Venkataramani and Ramesh K. Sitaraman},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/journals/corr/abs-1209-5715},
    journal = {CoRR},
    link = {http://arxiv.org/abs/1209.5715},
    timestamp = {Mon, 17 Jun 2013 09:12:32 +0200},
    title = {Distributing Content Simplifies ISP Traffic Engineering},
    volume = {abs/1209.5715},
    year = {2012}
    }


  • Anticipatory Wireless Bitrate Control for Blocks
    Xiaozheng Tie, Anand Seetharam, Arun Venkataramani, Deepak Ganesan, Dennis Goeckel,
    ACM SIGCOMM Conference on Emerging Networking Experiments and Technologies (CoNEXT), December 2011,
    A related poster paper appeared at USENIX NSDI'11.
    [+]Show Abstract
    We present BlockRate, a wireless bitrate adaptation algorithm designed for blocks, or large contiguous units of transmitted data, as opposed to small packets. Our work is motivated by the observation that recent research results suggest significant overhead amortization benefits of blocks. Yet state-of-the-art bitrate algorithms are optimized for adaptation on a per-packet basis, so they can either have the amortization benefits of blocks or high responsiveness to underlying channel conditions of packets, but not both. To bridge this disparity, BlockRate employs multiple bitrates within a block that are predictive of future channel conditions. In each feedback round, BlockRate uses a history-based scheme to predict the SNR for packets within the next block. In slow-changing scenarios, such as under pedestrian mobility, BlockRate uses a simple linear regression model to predict the SNR trend over the next block. In fast-changing scenarios, such as under vehicular mobility, BlockRate uses a path loss model to capture more significant SNR variations within a block. We have implemented a prototype of BlockRate in a commodity 802.11 driver and evaluated it via deployment on an indoor mesh testbed as well as an outdoor vehicular testbed. Our evaluation shows that BlockRate achieves up to 1.4× and 2.8× improvement in goodput under indoor and outdoor mobility respectively.
    [+]Show Bib
    @inproceedings{blockrate,
    author = {Xiaozheng Tie and Anand Seetharam and Arun Venkataramani and Deepak Ganesan and Dennis Goeckel},
    booktitle = {ACM SIGCOMM Conference on Emerging Networking Experiments and Technologies (CoNEXT)},
    month = {December},
    title = {Anticipatory Wireless Bitrate Control for Blocks},
    year = {2011}
    }


  • R3: Robust Replication Routing in Wireless Networks with Diverse Connectivity Characteristics
    Xiaozheng Tie, Arun Venkataramani, Aruna Balasubramanian,
    ACM MOBICOM, September 2011,
    Extended technical report with full proofs and additional experiments.
    [+]Show Abstract
    Our work is motivated by a simple question: can we design a simple routing protocol that ensures robust performance across networks with diverse connectivity characteristics such as meshes, MANETs, and DTNs? We identify packet replication as a key structural difference between protocols designed for opposite ends of the connectivity spectrum---DTNs and meshes. We develop a model to quantify under what conditions and by how much replication improves packet delays, and use these insights to drive the design of R3, a routing protocol that self-adapts replication to the extent of uncertainty in network path delays. We implement and deploy R3 on a mesh testbed and a DTN testbed. To the best of our knowledge, R3 is the first routing protocol to be deployed and evaluated on both a DTN testbed and a mesh testbed. We evaluate its performance through deployment, trace-driven simulations, and emulation experiments. Our results show that R3 achieves significantly better delay and goodput over existing protocols in a variety of network connectivity and load conditions.
    [+]Show Bib
    @inproceedings{R3,
    author = {Xiaozheng Tie and Arun Venkataramani and Aruna Balasubramanian},
    booktitle = {ACM MOBICOM},
    link = {http://www.cs.umass.edu/~arun/papers/R3.pdf},
    month = {September},
    title = {R3: Robust Replication Routing in Wireless Networks with Diverse Connectivity Characteristics},
    year = {2011}
    }


  • ZZ and the Art of Practical BFT Execution
    Tim Wood, Rahul Singh, Arun Venkataramani, Emmanuel Cecchet, Prashant Shenoy,
    ACM SIGOPS EuroSys, April 2011,
    Extended technical report with full proofs.
    [+]Show Abstract
    The high replication cost of Byzantine fault-tolerance (BFT) methods has been a major barrier to their widespread adoption in commercial distributed applications. We present ZZ, a new approach that reduces the replication cost of BFT services from 2f+1 to practically f+1. The key insight in ZZ is to use f+1 execution replicas in the normal case and to activate additional replicas only upon failures. In data centers where multiple applications share a physical server, ZZ reduces the aggregate number of execution replicas running in the data center, improving throughput and response times. ZZ relies on virtualization---a technology already employed in modern data centers---for fast replica activation upon failures, and enables newly activated replicas to immediately begin processing requests by fetching state on-demand. A prototype implementation of ZZ using the BASE library and Xen shows that, when compared to a system with 2f+1 replicas, our approach yields lower response times and up to 33% higher throughput in a prototype data center with four BFT web applications. We also show that ZZ can handle simultaneous failures and achieve sub-second recovery.
    [+]Show Bib
    @inproceedings{ZZ,
    author = {Tim Wood and Rahul Singh and Arun Venkataramani and Emmanuel Cecchet and Prashant Shenoy},
    booktitle = {ACM SIGOPS EuroSys},
    link = {http://www.cs.umass.edu/~arun/papers/ZZ.pdf},
    month = {April},
    title = {ZZ and the Art of Practical BFT Execution},
    year = {2011}
    }


  • Beyond MLU: An Application-Centric Comparison of Traffic Engineering Schemes
    Abhigyan, Aditya Mishra, Vikas Kumar, Arun Venkataramani,
    IEEE INFOCOM, April 2011,
    A related poster appeared at NSDI'09.
    Extended technical report.
    [+]Show Abstract
    Traffic engineering (TE) has been long studied as a network optimization problem, but its impact on user-perceived application performance has received little attention. Our paper takes a first step to address this disparity. Using real traffic matrices and topologies from three ISPs, we conduct very large-scale experiments simulating ISP traffic as an aggregate of a large number of TCP flows. Our application-centric, empirical approach yields two rather unexpected findings. First, link utilization metrics, and MLU in particular, are poor predictors of application performance. Despite significant differences in MLU, all TE schemes and even a static shortest-path routing scheme achieve nearly identical application performance. Second, application adaptation in the form of location diversity, i.e., the ability to download content from multiple potential locations, significantly improves the capacity achieved by all schemes. Even the ability to download from just 2–4 locations enables all TE schemes to achieve near-optimal capacity, and even static routing to be within 30% of optimal. Our findings call into question the value of TE as practiced today, and compel us to significantly rethink the TE problem in the light of application adaptation.
    [+]Show Bib
    @inproceedings{diverseTE,
    author = {Abhigyan and Aditya Mishra and Vikas Kumar and Arun Venkataramani},
    booktitle = {IEEE INFOCOM},
    link = {http://www.cs.umass.edu/~arun/papers/diverseTE.pdf},
    month = {April},
    title = {Beyond MLU: An Application-Centric Comparison of Traffic Engineering Schemes},
    year = {2011}
    }


  • Disaster Recovery as a Cloud Service: Economic Benefits & Deployment Challenges
    Timothy Wood, Emmanuel Cecchet, K.K. Ramakrishnan, Prashant Shenoy, Jacobus van der Merwe, Arun Venkataramani,
    2nd Workshop on Hot Topics in Cloud Computing (HotCloud), June 2010
    [+]Show Abstract
    Many businesses rely on Disaster Recovery (DR) services to prevent either manmade or natural disasters from causing expensive service disruptions. Unfortunately, current DR services come either at very high cost, or with only weak guarantees about the amount of data lost or time required to restart operation after a failure. In this work, we argue that cloud computing platforms are well suited for offering DR as a service due to their pay-as-you-go pricing model that can lower costs, and their use of automated virtual platforms that can minimize the recovery time after a failure. To this end, we perform a pricing analysis to estimate the cost of running a public cloud based DR service and show significant cost reductions compared to using privately owned resources. Further, we explore what additional functionality must be exposed by current cloud platforms and describe what challenges remain in order to minimize cost, data loss, and recovery time in cloud based DR services.
    [+]Show Bib
    @inproceedings{dr-cloud,
    author = {Timothy Wood and Emmanuel Cecchet and K.K. Ramakrishnan and Prashant Shenoy and Jacobus van der Merwe and Arun Venkataramani},
    booktitle = {2nd Workshop on Hot Topics in Cloud Computing (HotCloud)},
    link = {http://www.cs.umass.edu/~arun/papers/dr-cloud.pdf},
    month = {June},
    title = {Disaster Recovery as a Cloud Service: Economic Benefits & Deployment Challenges},
    year = {2010}
    }


  • Contracts: Practical contribution incentives for P2P live streaming
    Michael Piatek, Arvind Krishnamurthy, Arun Venkataramani, Richard Yang, David Zhang, Alexander Jaffe,
    USENIX NSDI, April 2010
    [+]Show Abstract
    PPLive is a popular P2P video system used daily by millions of people worldwide. Achieving this level of scalability depends on users making contributions to the system, but currently, these contributions are neither verified nor rewarded. In this paper, we describe the design and implementation of Contracts, a new, practical approach to providing contribution incentives in P2P live streaming systems. Using measurements of tens of thousands of PPLive users, we show that widely-used bilateral incentive strategies cannot be effectively applied to the live streaming environment. Contracts adopts a different approach: rewarding globally effective contribution with improved robustness. Using a modified PPLive client, we show that Contracts both improves performance and strengthens contribution incentives. For example, in our experiments, the fraction of PPLive clients using Contracts experiencing loss-free playback is more than 4 times that of native PPLive.
    [+]Show Bib
    @inproceedings{contracts,
    author = {Michael Piatek and Arvind Krishnamurthy and Arun Venkataramani and Richard Yang and David Zhang and Alexander Jaffe},
    booktitle = {USENIX NSDI},
    link = {http://www.cs.umass.edu/~arun/papers/contracts.pdf},
    month = {April},
    title = {Contracts: Practical contribution incentives for P2P live streaming},
    year = {2010}
    }


  • Replication routing in DTNs: A resource allocation approach
    Aruna Balasubramanian, Brian Levine, Arun Venkataramani,
    To appear in IEEE/ACM Transactions on Networking (ToN), 2010,
    An earlier version appeared at ACM SIGCOMM'07.
    [+]Show Abstract
    Routing protocols for disruption-tolerant networks (DTNs) use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, and so these approaches have only an incidental effect on such routing metrics as maximum or average delivery delay. In this paper, we present RAPID, an intentional DTN routing protocol that can optimize a specific routing metric such as the worst-case delivery delay or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities that determine how packets should be replicated in the system. We evaluate RAPID rigorously through a prototype deployed over a vehicular DTN testbed of 40 buses and simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real outdoor DTN. Our results suggest that RAPID significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads, RAPID is within 10% of the optimal performance.
    [+]Show Bib
    @article{RAPID-ToN,
    author = {Aruna Balasubramanian and Brian Levine and Arun Venkataramani},
    journal = {IEEE/ACM Transactions on Networking (ToN)},
    link = {http://www.cs.umass.edu/~arun/papers/RAPID-ToN.pdf},
    title = {Replication routing in DTNs: A resource allocation approach},
    year = {2010}
    }


  • Estimating self-sustainability in peer-to-peer swarming systems
    Daniel S. Menasché, Antonio Augusto de Aragão Rocha, Edmundo de Souza e Silva, Rosa Maria Meri Leão, Donald F. Towsley, Arun Venkataramani,
    Performance Evaluation 67(11), 2010
    [+]Show Abstract
    Peer-to-peer swarming is one of the de facto solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on and costs to publishers. However, there is a limit to how much cost savings can be gained from swarming; for example, for unpopular content peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate this dependence. For this purpose, we propose a new metric, namely swarm self-sustainability. A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer the posed question. We then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulation. Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers.
    [+]Show Bib
    @article{ESS,
    author = {Daniel S. Menasché and Antonio Augusto de Aragão Rocha and Edmundo de Souza e Silva and Rosa Maria Meri Leão and Donald F. Towsley and Arun Venkataramani},
    journal = {Performance Evaluation},
    volume = {67},
    number = {11},
    pages = {1243--1258},
    title = {Estimating self-sustainability in peer-to-peer swarming systems},
    year = {2010}
    }


  • Augmenting mobile 3G using WiFi
    Aruna Balasubramanian, Ratul Mahajan, Arun Venkataramani,
    Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services (MobiSys 2010), San Francisco, California, USA, June 15-18, 2010
    [+]Show Abstract
    We investigate if WiFi access can be used to augment 3G capacity in mobile environments. We first conduct a detailed study of 3G and WiFi access from moving vehicles, in three different cities. We find that the average 3G and WiFi availability across the cities is 87% and 11%, respectively. WiFi throughput is lower than 3G throughput, and WiFi loss rates are higher. We then design a system, called Wiffler, to augment mobile 3G capacity. It uses two key ideas -- leveraging delay tolerance and fast switching -- to overcome the poor availability and performance of WiFi. For delay tolerant applications, Wiffler uses a simple model of the environment to predict WiFi connectivity. It uses these predictions to delay transfers to offload more data on WiFi, but only if delaying reduces 3G usage and the transfers can be completed within the application's tolerance threshold. For applications that are extremely sensitive to delay or loss (e.g., VoIP), Wiffler quickly switches to 3G if WiFi is unable to successfully transmit the packet within a small time window. We implement and deploy Wiffler in our vehicular testbed. Our experiments show that Wiffler significantly reduces 3G usage. For a realistic workload, the reduction is 45% for a delay tolerance of 60 seconds.
    [+]Show Bib
    @inproceedings{DBLP:conf/mobisys/BalasubramanianMV10,
    author = {Aruna Balasubramanian and Ratul Mahajan and Arun Venkataramani},
    bibsource = {dblp computer science bibliography, http://dblp.org},
    biburl = {http://dblp.uni-trier.de/rec/bib/conf/mobisys/BalasubramanianMV10},
    booktitle = {Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services (MobiSys 2010), San Francisco, California, USA, June 15-18, 2010},
    crossref = {DBLP:conf/mobisys/2010},
    doi = {10.1145/1814433.1814456},
    link = {http://doi.acm.org/10.1145/1814433.1814456},
    pages = {209--222},
    timestamp = {Mon, 21 Jun 2010 10:11:37 +0200},
    title = {Augmenting mobile 3G using WiFi},
    year = {2010}
    }


  • Information plane for determining performance metrics of paths between arbitrary end-hosts on the internet
    Harsha V. Madhyastha, Thomas Anderson, Arvind Krishnamurthy, Arun Venkataramani
    [+]Show Abstract
    Performance metrics between any two arbitrary end-hosts are predicted based upon previous measurements on the Internet between a plurality of geographically dispersed vantage points and clusters of end-hosts. Each cluster comprises end-hosts that are related based upon their IP address prefixes. In response to a central agent that stores the measured data for each of a plurality of predicted paths on the Internet, the vantage points use traceroute software to measure and periodically update performance metrics such as ...

  • Content Availability and Bundling in Peer-to-Peer Swarming Systems
    Daniel Menasche, Antonio Rocha, Bin Li, Donald Towsley, Arun Venkataramani,
    ACM SIGCOMM Conference on Emerging Networking Experiments and Technologies (CoNEXT), December 2009,
    Awarded Best Paper and fast-tracked to IEEE/ACM Transactions on Networking.
    Extended technical report with proofs. A related poster paper appeared at ACM SIGMETRICS'09.
    [+]Show Abstract
    BitTorrent, the immensely popular file swarming system, suffers a fundamental problem: content unavailability. Although swarming scales well to tolerate flash crowds for popular content, it is less useful for unpopular content as peers arriving after the initial rush find the content unavailable. Our primary contribution is a model to quantify content availability in swarming systems. We use the model to analyze the availability and the performance implications of bundling, a strategy commonly adopted by many BitTorrent publishers today. We find that even a limited amount of bundling exponentially reduces content unavailability. Surprisingly, for swarms with highly unavailable publishers, the availability gain of bundling can result in a net improvement in download time, i.e., peers obtain more content in less time. We empirically confirm the model’s conclusions through experiments on PlanetLab using the mainline BitTorrent client.
    [+]Show Bib
    @inproceedings{bundling,
    author = {Daniel Menasche and Antonio Rocha and Bin Li and Donald Towsley and Arun Venkataramani},
    booktitle = {ACM SIGCOMM Conference on Emerging Networking Experiments and Technologies (CoNEXT)},
    link = {http://www.cs.umass.edu/~arun/papers/bundling.pdf},
    month = {December},
    title = {Content Availability and Bundling in Peer-to-Peer Swarming Systems},
    year = {2009}
    }


  • Energy Consumption in Mobile Phones: A Measurement Study and Implications for Network Applications
    Niranjan Balasubramanian, Aruna Balasubramanian, Arun Venkataramani,
    ACM SIGCOMM Internet Measurement Conference (IMC), November 2009
    [+]Show Abstract
    In this paper, we present a measurement study of the energy consumption characteristics of three widespread mobile networking technologies: 3G, GSM, and WiFi. We find that 3G and GSM incur a high tail energy overhead because of lingering in high power states after completing a transfer. Based on these measurements, we develop a model for the energy consumed by network activity for each technology. Using this model, we develop TailEnder, a protocol that reduces energy consumption of common mobile applications. For applications that can tolerate a small delay such as e-mail, TailEnder schedules transfers so as to minimize the cumulative energy consumed meeting user-specified deadlines. We show that the TailEnder scheduling algorithm is within a factor 2x of the optimal and show that any online algorithm can at best be within a factor 1.62x of the optimal. For applications like web search that can benefit from prefetching, TailEnder aggressively prefetches several times more data and improves user-specified response times while consuming less energy. We evaluate the benefits of TailEnder for three different case study applications - email, news feeds, and web search - based on real user logs and show significant reduction in energy consumption in each case. Experiments conducted on the mobile phone show that TailEnder can download 60% more news feed updates and download search results for more than 50% of web queries, compared to using the default policy.
    [+]Show Bib
    @inproceedings{TailEnder,
    author = {Niranjan Balasubramanian and Aruna Balasubramanian and Arun Venkataramani},
    booktitle = {ACM SIGCOMM Internet Measurement Conference (IMC)},
    link = {http://www.cs.umass.edu/~arun/papers/TailEnder.pdf},
    month = {November},
    title = {Energy Consumption in Mobile Phones: A Measurement Study and Implications for Network Applications},
    year = {2009}
    }


  • Modeling Chunk Availability in P2P Swarming Systems
    Daniel Menasche, Antonio Rocha, Edmundo de Souza e Silva, Rosa M. Meri Leao, Don Towsley, Arun Venkataramani,
    ACM SIGMETRICS Workshop on MAthematical Performance Modeling and Analysis (MAMA), June 2009
    [+]Show Abstract
    Peer-to-peer swarming systems, such as BitTorrent, are usually deployed for the dissemination of popular content. Popular content naturally gets highly replicated in the network and capacity scales with demand ensuring high performance for peers requesting popular content. Nevertheless, the behavior of swarming systems in the face of unpopular content and small populations of users also deserves attention. First, it is important to understand what is the popularity threshold above which the use of swarming systems is ...
    [+]Show Bib
    @inproceedings{chunkav,
    author = {Daniel Menasche and Antonio Rocha and Edmundo de Souza e Silva and Rosa M. Meri Leao and Don Towsley and Arun Venkataramani},
    booktitle = {ACM SIGMETRICS Workshop on MAthematical Performance Modeling and Analysis (MAMA)},
    link = {http://www.cs.umass.edu/~arun/papers/chunkav.pdf},
    month = {June},
    title = {Modeling Chunk Availability in P2P Swarming Systems},
    year = {2009}
    }


  • Modeling Content Availability in Peer-to-Peer Swarming Systems
    Daniel Menasche, Antonio Rocha, Bin Li, Don Towsley, Arun Venkataramani,
    ACM SIGMETRICS poster paper, June 2009
    [+]Show Abstract
    In this paper we model peer-to-peer systems where content may become unavailable. First, using Markov Chains we capture the impact of parameters such as the server upload rate and the popularity of the files on the performance of the system. Then, we show the applicability of our model to the problem of optimal bundling. Bundling files together, rather than disseminating them separately, is a possible solution to improve availability in BitTorrent.
    [+]Show Bib
    @inproceedings{bundling2,
    author = {Daniel Menasche and Antonio Rocha and Bin Li and Don Towsley and Arun Venkataramani},
    booktitle = {ACM SIGMETRICS poster paper},
    link = {http://www.cs.umass.edu/~arun/papers/bundling2.pdf},
    month = {June},
    title = {Modeling Content Availability in Peer-to-Peer Swarming Systems},
    year = {2009}
    }


  • iPlane Nano: Path Prediction for Peer-to-Peer Applications
    Harsha Madhyastha, Ethan Katz-Bassett, Thomas Anderson, Arvind Krishnamurthy, Arun Venkataramani,
    USENIX NSDI, April 2009
    [+]Show Abstract
    Many peer-to-peer distributed applications can benefit from accurate predictions of Internet path performance. Existing approaches either 1) achieve high accuracy for sophisticated path properties, but adopt an unscalable centralized approach, or 2) are lightweight and decentralized, but work only for latency prediction. In this paper, we present the design and implementation of iPlane Nano, a library for delivering Internet path information to peer-to-peer applications. iPlane Nano is itself a peer-to-peer application, and scales to a large number of end hosts with little centralized infrastructure and with a low cost of participation. The key enabling idea underlying iPlane Nano is a compact model of Internet routing. Our model can accurately predict end-to-end PoP-level paths, latencies, and loss rates between arbitrary hosts on the Internet, with 70% of AS paths predicted exactly in our evaluation set. Yet our model can be stored in less than 7MB and updated with approximately 1MB/day. Our evaluation of iPlane Nano shows that it can provide significant performance improvements for large-scale applications. For example, iPlane Nano yields near-optimal download performance for both small and large files in a P2P content delivery system.
    [+]Show Bib
    @inproceedings{iNano,
    author = {Harsha Madhyastha and Ethan Katz-Bassett and Thomas Anderson and Arvind Krishnamurthy and Arun Venkataramani},
    booktitle = {USENIX NSDI},
    link = {http://www.cs.umass.edu/~arun/papers/iNano.pdf},
    month = {April},
    title = {iPlane Nano: Path Prediction for Peer-to-Peer Applications},
    year = {2009}
    }


  • Block-switched Networks: A New Paradigm for Wireless Transport
    Ming Li, Devesh Agrawal, Deepak Ganesan, Arun Venkataramani,
    USENIX NSDI, April 2009
    [+]Show Abstract
    TCP has well-known problems over multi-hop wireless networks as it conflates congestion and loss, performs poorly over time-varying and lossy links, and is fragile in the presence of route changes and disconnections. Our contribution is a clean-slate design and implementation of a wireless transport protocol, Hop, that uses reliable per-hop block transfer as a building block. Hop is 1) fast, because it eliminates many sources of overhead as well as noisy end-to-end rate control, 2) robust to partitions and route changes because of hop-by-hop control as well as in-network caching, and 3) simple, because it obviates complex end-to-end rate control as well as complex interactions between the transport and link layers. Our experiments over a 20-node multi-hop mesh network show that Hop is dramatically more efficient, achieving better fairness, throughput, delay, and robustness to partitions over several alternate protocols, including gains of more than an order of magnitude in median throughput.
    [+]Show Bib
    @inproceedings{Hop,
    author = {Ming Li and Devesh Agrawal and Deepak Ganesan and Arun Venkataramani},
    booktitle = {USENIX NSDI},
    link = {http://www.cs.umass.edu/~arun/papers/Hop.pdf},
    month = {April},
    title = {Block-switched Networks: A New Paradigm for Wireless Transport},
    year = {2009}
    }


  • Sandpiper: Black-box and Gray-box Resource Management for Virtual Machines
    Timothy Wood, Prashant Shenoy, Arun Venkataramani, Mazin Yousif,
    To appear in Computer Networks Journal Special Issue on Virtualized Data Centers, 2009,
    An earlier version appeared at USENIX NSDI'07.
    [+]Show Abstract
    Virtualization can provide significant benefits in data centers by enabling dynamic virtual machine resizing and migration to eliminate hotspots. We present Sandpiper, a system that automates the task of monitoring and detecting hotspots, determining a new mapping of physical to virtual resources, resizing virtual machines to their new allocations, and initiating any necessary migrations. Sandpiper implements a black-box approach that is fully OS- and application-agnostic and a gray-box approach that exploits OS- and application-level statistics. We implement our techniques in Xen and conduct a detailed evaluation using a mix of CPU, network and memory-intensive applications. Our results show that Sandpiper is able to resolve single server hotspots within 20s and scales well to larger, data center environments. We also show that the gray-box approach can help Sandpiper make more informed decisions, particularly in response to memory pressure.
    [+]Show Bib
    @article{Sandpiper-CN,
    author = {Timothy Wood and Prashant Shenoy and Arun Venkataramani and Mazin Yousif},
    journal = {Computer Networks Journal Special Issue on Virtualized Data Centers},
    link = {http://www.cs.umass.edu/~arun/papers/sandpipe-CN.pdf},
    title = {Sandpiper: Black-box and Gray-box Resource Management for Virtual Machines},
    year = {2009}
    }


  • Enhancing Interactive Applications in Hybrid Networks
    Aruna Balasubramanian, Brian N. Levine, Arun Venkataramani,
    ACM MOBICOM, September 2008
    [+]Show Abstract
    Mobile Internet users have several options today including high bandwidth cellular data services such as 3G, that may be the choice for many. However, the ubiquity and low cost of WiFi suggests an attractive alternative, namely, opportunistic use of open WiFi access points (APs) or planned municipal mesh networks. Unfortunately, for vehicular users, the intermittent nature of WiFi connectivity makes it challenging to support popular interactive applications such as Web search and browsing. Our work is driven by two questions. 1) How can we enable system support for interactive web applications to tolerate disruptions in WiFi connectivity from mobile nodes? 2) Can opportunistic mobile-to-mobile (m2m) transfers enhance application performance over only using APs, and if so, under what conditions and by how much? We present Thedu, a system that enables access to Web search from moving vehicles. The key idea is to use aggressive prefetching to transform the interactive Web search application into a one-shot request/response process. We deployed a prototype of Thedu on the DieselNet testbed in Amherst, MA, consisting of transit buses averaging 21 on the road at a time. Our deployment results show that Thedu can deliver 4 times as many relevant web pages as when not using Thedu. A bus receives relevant web pages with a mean delay of 2.3 minutes and within 0.55 minutes in areas with high AP density. Thedu augments AP connectivity with m2m transfers using a utility-driven DTN routing algorithm and uses caching to exploit query locality. Our analytic model and trace-driven simulations suggest that m2m routing yields little benefit over using APs alone even under moderately dense AP deployment such as in Amherst. With sparsely deployed APs as may be the case in rural areas, our conclusions are more mixed: m2m routing with caching improves the number of relevant responses delivered per bus by up to 58%, but the mean delay is significantly high at 6.7 minutes, calling into question its practicality for interactive applications.
    [+]Show Bib
    @inproceedings{thedu-mobicom,
    author = {Aruna Balasubramanian and Brian N. Levine and Arun Venkataramani},
    booktitle = {ACM MOBICOM},
    link = {http://www.cs.umass.edu/~arun/papers/thedu.pdf},
    month = {September},
    title = {Enhancing Interactive Applications in Hybrid Networks},
    year = {2008}
    }


  • Interactive WiFi Connectivity From Moving Vehicles
    Aruna Balasubramanian, Ratul Mahajan, Arun Venkataramani, Brian N. Levine, John Zahorjan,
    ACM SIGCOMM, August 2008
    [+]Show Abstract
    We ask if the ubiquity of WiFi can be leveraged to provide cheap connectivity from moving vehicles for common applications such as Web browsing and VoIP. Driven by this question, we conduct a study of connection quality available to vehicular WiFi clients based on measurements from testbeds in two different cities. We find that current WiFi handoff methods, in which clients communicate with one basestation at a time, lead to frequent disruptions in connectivity. We also find that clients can overcome many disruptions by communicating with multiple basestations simultaneously. These findings lead us to develop ViFi, a protocol that opportunistically exploits basestation diversity to minimize disruptions and support interactive applications for mobile clients. ViFi uses a decentralized and lightweight probabilistic algorithm for coordination between participating basestations. Our evaluation using a two-month-long deployment and trace-driven simulations shows that its link-layer performance comes close to an ideal diversity-based protocol. Using two applications, VoIP and short TCP transfers, we show that the link-layer performance improvement translates to better application performance. In our deployment, ViFi doubles the number of successful short TCP transfers and doubles the length of disruption-free VoIP sessions compared to an existing WiFi-style handoff protocol.
    [+]Show Bib
    @inproceedings{ViFi,
    author = {Aruna Balasubramanian and Ratul Mahajan and Arun Venkataramani and Brian N. Levine and John Zahorjan},
    booktitle = {ACM SIGCOMM},
    link = {http://www.cs.umass.edu/~arun/papers/ViFi.pdf},
    month = {August},
    title = {Interactive WiFi Connectivity From Moving Vehicles},
    year = {2008}
    }


  • Consensus Routing: The Internet as a Distributed System
    John P. John, Ethan Katz-Bassett, Arvind Krishnamurthy, Thomas E. Anderson, Arun Venkataramani,
    USENIX NSDI (Awarded Best Paper), April 2008
    [+]Show Abstract
    Internet routing protocols (BGP, OSPF, RIP) have traditionally favored responsiveness over consistency. A router applies a received update immediately to its forwarding table before propagating the update to other routers, including those that potentially depend upon the outcome of the update. Responsiveness comes at the cost of routing loops and blackholes: a router A thinks its route to a destination is via B, but B disagrees. By favoring responsiveness (a liveness property) over consistency (a safety property), Internet routing has lost both. Our position is that consistent state in a distributed system makes its behavior more predictable and securable. To this end, we present consensus routing, a consistency-first approach that cleanly separates safety and liveness using two logically distinct modes of packet delivery: a stable mode where a route is adopted only after all dependent routers have agreed upon it, and a transient mode that heuristically forwards the small fraction of packets that encounter failed links. Somewhat surprisingly, we find that consensus routing improves overall availability when used in conjunction with existing transient mode heuristics such as backup paths, deflections, or detouring. Experiments on the Internet's AS-level topology show that consensus routing eliminates nearly all transient disconnectivity in BGP.
    [+]Show Bib
    @inproceedings{consensus,
    author = {John P. John and Ethan Katz-Bassett and Arvind Krishnamurthy and Thomas E. Anderson and Arun Venkataramani},
    booktitle = {USENIX NSDI (Awarded Best Paper)},
    link = {http://www.cs.umass.edu/~arun/papers/consensus.pdf},
    month = {April},
    title = {Consensus Routing: The Internet as a Distributed System},
    year = {2008}
    }


  • Multi-User Data Sharing in Radar Sensor Networks
    Ming Li, Tingxin Yan, Deepak Ganesan, Eric Lyons, Prashant Shenoy, Arun Venkataramani, Michael Zink,
    ACM SENSYS, November 2007,
    Also appeared as a demo paper at SENSYS 2007.
    [+]Show Abstract
    In this paper, we focus on a network of rich sensors that are geographically distributed and argue that the design of such networks poses very different challenges from traditional "mote-class" sensor network design. We identify the need to handle the diverse requirements of multiple users to be a major design challenge, and propose a utility-driven approach to maximize data sharing across users while judiciously using limited network and computational resources. Our utility-driven architecture addresses three key challenges for such rich multi-user sensor networks: how to define utility functions for networks with data sharing among end-users, how to compress and prioritize data transmissions according to their importance to end-users, and how to gracefully degrade end-user utility in the presence of bandwidth fluctuations. We instantiate this architecture in the context of geographically distributed wireless radar sensor networks for weather, and present results from an implementation of our system on a multi-hop wireless mesh network that uses real radar data with real end-user applications. Our results demonstrate that our progressive compression and transmission approach achieves an order of magnitude improvement in application utility over existing utility-agnostic non-progressive approaches, while also scaling better with the number of nodes in the network.
    [+]Show Bib
    @inproceedings{MUDS,
    author = {Ming Li and Tingxin Yan and Deepak Ganesan and Eric Lyons and Prashant Shenoy and Arun Venkataramani and Michael Zink},
    booktitle = {ACM SENSYS},
    link = {http://www.cs.umass.edu/~arun/papers/MUDS.pdf},
    month = {November},
    title = {Multi-User Data Sharing in Radar Sensor Networks},
    year = {2007}
    }


  • Web Search from a Bus
    Aruna Balasubramanian, Yun Zhou, Bruce Croft, Brian N. Levine, Arun Venkataramani,
    ACM MobiCom Workshop on Challenged Networks (CHANTS), September 2007
    [+]Show Abstract
    Opportunistic connections to the Internet from open wireless access points are now commonly possible in urban areas. Vehicular networks can opportunistically connect to the Internet for several seconds via open access points. In this paper, we adapt the interactive process of web search and retrieval to vehicular networks with intermittent Internet access. Our system, called Thedu, has mobile nodes use an Internet proxy to collect search engine results and prefetch result pages. The mobile nodes download the prefetched web pages from the proxy. Our contribution is a novel set of techniques to make aggressive but selective prefetching practical, resulting in a significantly greater number of relevant web results returned to mobile users. In particular, we prioritize responses in the order of their usefulness to the query, which allows the mobile node to download the most useful response first. To evaluate our scheme, we deployed Thedu on DieselNet, our vehicular testbed operating in a micro-urban area around Amherst, MA. Using a simulated workload, we find that users can expect four times as many useful responses to web search queries compared to not using Thedu's mechanisms. Moreover, the mean latency in receiving the first relevant response for a query is 2.7 minutes for our deployment; we expect Thedu to have even better performance in larger cities that have densely populated open APs.
    [+]Show Bib
    @inproceedings{thedu,
    author = {Aruna Balasubramanian and Yun Zhou and Bruce Croft and Brian N. Levine and Arun Venkataramani},
    booktitle = {ACM MobiCom Workshop on Challenged Networks (CHANTS)},
    link = {http://www.cs.umass.edu/~arun/papers/search.pdf},
    month = {September},
    title = {Web Search from a Bus},
    year = {2007}
    }


  • DTN Routing as a Resource Allocation Problem
    Aruna Balasubramanian, Brian N. Levine, Arun Venkataramani,
    ACM SIGCOMM, August 2007,
    Extended technical report with full proofs.
    [+]Show Abstract
    Many DTN routing protocols use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, so these approaches have only an incidental effect on such routing metrics as maximum or average delivery latency. In this paper, we present RAPID, an intentional DTN routing protocol that can optimize a specific routing metric such as worst-case delivery latency or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities which determine how packets should be replicated in the system. We evaluate RAPID rigorously through a prototype of RAPID deployed over a vehicular DTN testbed of 40 buses and simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real DTN at this scale. Our results suggest that RAPID significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads RAPID is within 10% of the optimal performance.
    [+]Show Bib
    @inproceedings{RAPID,
    author = {Aruna Balasubramanian and Brian N. Levine and Arun Venkataramani},
    booktitle = {ACM SIGCOMM},
    link = {http://www.cs.umass.edu/~arun/papers/RAPID.pdf},
    month = {August},
    title = {DTN Routing as a Resource Allocation Problem},
    year = {2007}
    }


  • Availability in BitTorrent Systems
    Giovanni Neglia, Giuseppe Reina, Honggang Zhang, Donald Towsley, Arun Venkataramani, John Danaher,
    IEEE INFOCOM, May 2007
    [+]Show Abstract
    In this paper, we investigate the problem of highly available, massive-scale file distribution in the Internet. To this end, we conduct a large-scale measurement study of BitTorrent, a popular class of systems that use swarms of actively downloading peers to assist each other in file distribution. The first generation of BitTorrent systems used a central tracker to enable coordination among peers, resulting in low availability due to the tracker's single point of failure. Our study analyzes the prevalence and impact of two recent trends to improve BitTorrent availability: (i) use of multiple trackers, and (ii) use of Distributed Hash Tables (DHTs), both of which also help to balance load better. The study considered more than 1,400 trackers and 24,000 DHT nodes (extracted from about 20,000 torrents) over a period of two months. We find that both trends improve availability, but for different and somewhat unexpected reasons. Our findings include: (i) multiple trackers improve availability, but the improvement largely comes from the choice of a single highly available tracker, (ii) such improvement is reduced by the presence of correlated failures, (iii) multiple trackers can significantly reduce the connectivity of the overlay formed by peers, (iv) the DHT improves information availability, but induces a higher response latency to peer queries.
    [+]Show Bib
    @inproceedings{BTavailability,
    author = {Giovanni Neglia and Giuseppe Reina and Honggang Zhang and Donald Towsley and Arun Venkataramani and John Danaher},
    booktitle = {IEEE INFOCOM},
    link = {http://www.cs.umass.edu/~arun/papers/BT_availability.pdf},
    month = {May},
    title = {Availability in BitTorrent Systems},
    year = {2007}
    }


  • A Multipath Background Network Architecture
    Ravi Kokku, Aniruddha Bohra, Samrat Ganguly, Arun Venkataramani,
    IEEE INFOCOM, May 2007,
    Extended technical report with full proofs.
    [+]Show Abstract
    Background transfers, or transfers that humans do not actively wait on, dominate the Internet today. In today's best-effort Internet, background transfers can interfere with foreground transfers causing long wait times, thereby hurting human productivity. In this paper, we present the design and implementation of a background network, Harp, that addresses this problem. Harp has three significant advantages over recent end-host based background transport protocols; Harp (i) uses multiple paths to exploit path diversity and load imbalance in the Internet to tailor network resource allocation to human needs, (ii) provides better fairness and utilization compared to unipath end-host protocols, and (iii) can be deployed at either end-hosts or enterprise gateways, thereby aligning the incentive for deployment with the goals of network customers. Our evaluation using simulations and a prototype on Planetlab suggests that Harp improves foreground TCP transfer time by a factor of five and background transfer time by a factor of two using just two extra paths per connection.
    [+]Show Bib
    @inproceedings{harp,
    author = {Ravi Kokku and Aniruddha Bohra and Samrat Ganguly and Arun Venkataramani},
    booktitle = {IEEE INFOCOM},
    link = {http://www.cs.umass.edu/~arun/papers/harp.pdf},
    month = {May},
    title = {A Multipath Background Network Architecture},
    year = {2007}
    }


  • Black-box and Gray-box Strategies for Virtual Machine Migration
    Tim Wood, Prashant Shenoy, Arun Venkataramani, Mazin Yousif,
    USENIX NSDI, April 2007
    [+]Show Abstract
    Virtualization can provide significant benefits in data centers by enabling virtual machine migration to eliminate hotspots. We present Sandpiper, a system that automates the task of monitoring and detecting hotspots, determining a new mapping of physical to virtual resources and initiating the necessary migrations. Sandpiper implements a black-box approach that is fully OS- and application-agnostic and a gray-box approach that exploits OS- and application-level statistics. We implement our techniques in Xen and conduct a detailed evaluation using a mix of CPU, network and memory-intensive applications. Our results show that Sandpiper is able to resolve single server hotspots within 20 seconds and scales well to larger data center environments. We also show that the gray-box approach can help Sandpiper make more informed decisions, particularly in response to memory pressure.
    [+]Show Bib
    @inproceedings{VM,
    author = {Tim Wood and Prashant Shenoy and Arun Venkataramani and Mazin Yousif},
    booktitle = {USENIX NSDI},
    link = {http://www.cs.umass.edu/~arun/papers/sandpiper.pdf},
    month = {April},
    title = {Black-box and Gray-box Strategies for Virtual Machine Migration},
    year = {2007}
    }


  • Do Incentives Build Robustness in BitTorrent?
    Mike Piatek, Tomas Isdal, Thomas E. Anderson, Arvind Krishnamurthy, Arun Venkataramani,
    USENIX NSDI (Awarded Best Student Paper), April 2007,
    Download BitTyrant
    Featured article in the USENIX ;login: magazine, August 2007.
    [+]Show Abstract
    A fundamental problem with many peer-to-peer systems is the tendency for users to "free ride"--to consume resources without contributing to the system. The popular file distribution tool BitTorrent was explicitly designed to address this problem, using a tit-for-tat reciprocity strategy to provide positive incentives for nodes to contribute resources to the swarm. While BitTorrent has been extremely successful, we show that its incentive mechanism is not robust to strategic clients. Through performance modeling parameterized by real world traces, we demonstrate that all peers contribute resources that do not directly improve their performance. We use these results to drive the design and implementation of BitTyrant, a strategic BitTorrent client that provides a median 70% performance gain for a 1 Mbit client on live Internet swarms. We further show that when applied universally, strategic clients can hurt average per-swarm performance compared to today's BitTorrent client implementations.
    [+]Show Bib
    @inproceedings{incentives,
    author = {Mike Piatek and Tomas Isdal and Thomas E. Anderson and Arvind Krishnamurthy and Arun Venkataramani},
    booktitle = {USENIX NSDI (Awarded Best Student Paper)},
    link = {http://www.cs.umass.edu/~arun/papers/bittyrant.pdf},
    month = {April},
    title = {Do Incentives Build Robustness in BitTorrent?},
    year = {2007}
    }


  • A Comparison of DAG Scheduling Strategies for Internet-Based Computing
    Rob Hall, Arnold Rosenberg, Arun Venkataramani,
    IEEE International Parallel and Distributed Processing Symposium (IPDPS), March 2007
    [+]Show Abstract
    A fundamental challenge in Internet computing (IC) is to efficiently schedule computations having complex inter-job dependencies, given the unpredictability of remote machines in availability and time of access. The recent IC scheduling theory focuses on these sources of unpredictability by crafting schedules that maximize the number of executable jobs at every point in time. In this paper, we experimentally investigate the key question: does IC scheduling yield significant positive benefits for real IC? To this end, we develop a realistic computation model to match jobs to client machines and conduct extensive simulations to compare IC-optimal schedules against popular, intuitively compelling heuristics. Our results suggest that for a large range of computation-dags, client availability patterns, and two quite different performance metrics, IC-optimal schedules significantly outperform schedules produced by popular heuristics, by as much as 10-20%.
    [+]Show Bib
    @inproceedings{IC,
    author = {Rob Hall and Arnold Rosenberg and Arun Venkataramani},
    booktitle = {IEEE International Parallel and Distributed Processing Symposium (IPDPS)},
    link = {http://ieeexplore.ieee.org/document/4227973/},
    month = {March},
    title = {A Comparison of DAG Scheduling Strategies for Internet-Based Computing},
    year = {2007}
    }


  • Safe Speculative Replication
    Mike Dahlin, Arun Iyengar, Ravi Kokku, Amol Nayate, Arun Venkataramani, Praveen Yalagandula,
    Unpublished manuscript, UMass Computer Science, January 2007
    [+]Show Abstract
    This article argues that commonly-studied techniques for speculative replication—such as prefetching or prepushing content to a location before it is requested there—are inherently unsafe: they pose unacceptable risks of catastrophic overload and they may introduce bugs into systems by weakening consistency guarantees. To address these problems, the article introduces SSR, a new, general architecture for Safe Speculative Replication. SSR specifies the mechanisms that control how data flows through the system and leaves as a policy choice the question of what data to replicate to what nodes. SSR’s mechanisms (1) separate invalidations, demand updates, and speculative updates into three logical flows, (2) use per-resource schedulers that prioritize invalidations and demand updates over speculative updates, and (3) use a novel scheduler at the receiver to integrate these three flows in a way that maximizes performance and availability while meeting specified consistency constraints. We demonstrate the SSR architecture via two extensive case studies and show that SSR makes speculative replication practical for widespread adoption by (1) enabling self-tuning speculative replication, (2) cleanly integrating speculative replication with consistency, and (3) providing easy deployability with legacy infrastructure.
    [+]Show Bib
    @misc{ssr,
    author = {Mike Dahlin and Arun Iyengar and Ravi Kokku and Amol Nayate and Arun Venkataramani and Praveen Yalagandula},
    howpublished = {Unpublished manuscript, UMass Computer Science},
    link = {http://www.cs.umass.edu/~arun/papers/ssr.pdf},
    month = {January},
    title = {Safe Speculative Replication},
    year = {2007}
    }


  • Building BitTyrant, a (more) strategic BitTorrent client
    Michael Piatek, Tomas Isdal, Tom Anderson, Arvind Krishnamurthy, Arun Venkataramani,
    USENIX ;login:, Volume: 32, Number: 4, August 2007
    [+]Show Abstract
    Peer-to-peer systems often appeal to scalability as a motivating feature. As more users request data, more users contribute resources. Scaling a service by relying on user contributions—the P2P approach—depends on providing incentives for users to make those contributions. Recently, the popular BitTorrent file distribution tool has emerged as the canonical example of an incentive-aware P2P design. Although BitTorrent has been in widespread use for years and has been studied extensively, we find that its incentive strategy is not foolproof. This article describes BitTyrant, a new, strategic BitTorrent client. For users interested in faster downloads, BitTyrant provides a median 70% performance improvement on live Internet swarms. However, BitTyrant also demonstrates that selfish users can improve performance even while reducing upload contribution, circumventing intended incentives.

  • iPlane: Measurements and query interface
    Harsha V Madhyastha, Thomas Anderson, Arvind Krishnamurthy, Arun Venkataramani
    [+]Show Abstract
    In this document, we describe the application interface to iPlane, an information plane that continuously performs measurements to generate and maintain an annotated map of the Internet with a rich set of link and router attributes. As outlined in [1], applications can interface with iPlane in the following manner:
    • Applications can query and obtain any of the measurements that were collected by iPlane.
    • iPlane's measurements are limited to those discovered using probes from its vantage points. However, applications might desire path properties between arbitrary end-hosts. Hence, iPlane exports a query interface through which applications can request predictions of path properties between arbitrary end-hosts.
    • Applications can choose to upload to iPlane a few (typically 10) traceroutes outgoing from their clients. The traceroutes from a client help iPlane capture the routing diversity near that client for better predicting the performance of paths from it. While this information improves prediction accuracy, iPlane can also operate without having this information.
    iPlane maintains a database-like view of path properties between pairs of end-hosts. For every source-destination pair, there exists a row in the view with iPlane's predicted path between these hosts and the predicted performance of the path. Any query to iPlane involves an SQL-like query on this view – selecting some rows and columns, joining the view with itself, sorting rows based on values in certain columns, and so on. We provide tools that applications can use for interacting with iPlane. We also make the source code for all our tools publicly available. In subsequent sections, we describe the tools we provide for uploading traceroutes and for issuing queries to iPlane.

  • iPlane: An Information Plane for Distributed Services
    Harsha Madhyastha, Tomas Isdal, Mike Piatek, Colin Dixon, Thomas E. Anderson, Arvind Krishnamurthy, Arun Venkataramani,
    USENIX OSDI, December 2006,
    Also presented as a demo paper at WORLDS'06 held in conjunction with OSDI, and at NANOG 40, June'07.
    [+]Show Abstract
    In this paper, we present the design, implementation, and evaluation of iPlane, a scalable service providing accurate predictions of Internet path performance for emerging overlay services. Unlike the more common black box latency prediction techniques in use today, iPlane adopts a structural approach and predicts end-to-end path performance by composing the performance of measured segments of Internet paths. For the paths we observed, this method allows us to accurately and efficiently predict latency, bandwidth, capacity and loss rates between arbitrary Internet hosts. We demonstrate the feasibility and utility of the iPlane service by applying it to several representative overlay services in use today: content distribution, swarming peer-to-peer filesharing, and voice-over-IP. In each case, using iPlane's predictions leads to improved overlay performance.
    [+]Show Bib
    @inproceedings{iplane,
    author = {Harsha Madhyastha and Tomas Isdal and Mike Piatek and Colin Dixon and Thomas E. Anderson and Arvind Krishnamurthy and Arun Venkataramani},
    booktitle = {USENIX OSDI},
    link = {http://www.cs.umass.edu/~arun/papers/iplane.pdf},
    month = {December},
    title = {iPlane: An Information Plane for Distributed Services},
    year = {2006}
    }


  • Online Hierarchical Cooperative Caching
    Xiaozhou Li, C. Greg Plaxton, Mitul Tiwari, Arun Venkataramani,
    Theory of Computing Systems Journal, Volume: 39, Number: 6, November 2006,
    This journal paper extends the deterministic lower bound result in the conference paper to randomized online algorithms.
    [+]Show Abstract
    We address a hierarchical generalization of the well-known disk paging problem. In the hierarchical cooperative caching problem, a set of n machines residing in an ultrametric space cooperate with one another to satisfy a sequence of read requests to a collection of read-only files. A seminal result in the area of competitive analysis states that the “least recently used” (LRU) paging algorithm is constant-competitive if it is given a constant-factor blowup in capacity over the offline algorithm. Does such a constant-competitive deterministic algorithm, with a constant-factor blowup in the machine capacities, exist for the hierarchical cooperative caching problem? In this paper, we present a deterministic hierarchical generalization of LRU that is constant-competitive when the capacity blowup is linear in d, the depth of the cache hierarchy. Furthermore, we exhibit an infinite family of depth-d hierarchies such that any randomized hierarchical cooperative caching algorithm with capacity blowup b has competitive ratio Ω(log d/b) against an oblivious adversary. Thus, our upper and lower bounds imply a tight bound of Θ(d) on the capacity blowup required to achieve constant competitiveness.
    [+]Show Bib
    @article{hcc-tocs,
    author = {Xiaozhou Li and C. Greg Plaxton and Mitul Tiwari and Arun Venkataramani},
    journal = {Theory of Computing Systems Journal},
    link = {http://www.cs.umass.edu/~arun/papers/hcc-tocs.pdf},
    month = {November},
    title = {Online Hierarchical Cooperative Caching},
    volume = {39},
    number = {6},
    year = {2006}
    }


  • A Structural Approach to Internet Path Latency
    Harsha Madhyastha, Thomas E. Anderson, Arvind Krishnamurthy, Neil Spring, Arun Venkataramani,
    ACM SIGCOMM Internet Measurement Conference (IMC), October 2006
    [+]Show Abstract
    Several models have been recently proposed for predicting the latency of end-to-end Internet paths. These models treat the Internet as a black box, ignoring its internal structure. While these models are simple, they can often fail systematically; for example, the most widely used models use metric embeddings that predict no benefit to detour routes even though half of all Internet routes can benefit from detours. In this paper, we adopt a structural approach that predicts path latency based on measurements of the Internet's routing topology, PoP connectivity, and routing policy. We find that our approach outperforms Vivaldi, the most widely used black-box model. Furthermore, unlike metric embeddings, our approach successfully predicts 65% of detour routes in the Internet. The number of measurements used in our approach is comparable with that required by black-box techniques, but using traceroutes instead of pings.
    [+]Show Bib
    @inproceedings{explanatory,
    author = {Harsha Madhyastha and Thomas E. Anderson and Arvind Krishnamurthy and Neil Spring and Arun Venkataramani},
    booktitle = {ACM SIGCOMM Internet Measurement Conference (IMC)},
    link = {http://www.cs.umass.edu/~arun/papers/explanatory.pdf},
    month = {October},
    title = {A Structural Approach to Internet Path Latency},
    year = {2006}
    }


  • PRACTI Replication
    Nalini Belaramani, Mike Dahlin, Lei Gao, Amol Nayate, Arun Venkataramani, Praveen Yalagandula, Jiandan Zheng,
    USENIX NSDI, March 2006
    [+]Show Abstract
    We present PRACTI, a new approach for large-scale replication. PRACTI systems can replicate or cache any subset of data on any node (Partial Replication), provide a broad range of consistency guarantees (Arbitrary Consistency), and permit any node to send information to any other node (Topology Independence). A PRACTI architecture yields two significant advantages. First, by providing all three PRACTI properties, it enables better trade-offs than existing mechanisms that support at most two of the three desirable properties. The PRACTI approach thus exposes new points in the design space for replication systems. Second, the flexibility of PRACTI protocols simplifies the design of replication systems by allowing a single architecture to subsume a broad range of existing systems and to reduce development costs for new ones. To illustrate both advantages, we use our PRACTI prototype to emulate existing server replication, client-server, and object replication systems and to implement novel policies that improve performance for mobile users, web edge servers, and grid computing by as much as an order of magnitude.
    [+]Show Bib
    @inproceedings{belaramani06practi,
    author = {Nalini Belaramani and Mike Dahlin and Lei Gao and Amol Nayate and Arun Venkataramani and Praveen Yalagandula and Jiandan Zheng},
    booktitle = {USENIX NSDI},
    link = {http://www.cs.umass.edu/~arun/papers/PRACTI.pdf},
    month = {March},
    title = {PRACTI Replication},
    year = {2006}
    }


  • Oasis: An Overlay-Aware Network Stack
    Harsha Madhyastha, Thomas E. Anderson, Arvind Krishnamurthy, Arun Venkataramani,
    ACM SIGOPS Operating Systems Review, Volume:40(1) Pages: 41-48, January 2006
    [+]Show Abstract
    Overlays have enabled several new and popular distributed applications such as Akamai, Kazaa, and Bittorrent. However, the lack of an overlay-aware network stack has hindered the widespread use of general purpose overlay packet delivery services [16, 29, 26]. In this paper, we describe the design and implementation of Oasis, a system and toolkit that enables legacy operating systems to access overlay-based packet delivery services. Oasis combines a set of ideas – network address translation, name resolution, packet capture, dynamic code execution – to provide greater user choice. We are in the process of making the Oasis toolkit available for public use, specifically, to ease the development of PlanetLab-based packet delivery services.
    [+]Show Bib
    @article{oasis,
    author = {Harsha Madhyastha and Thomas E. Anderson and Arvind Krishnamurthy and Arun Venkataramani},
    journal = {ACM SIGOPS Operating Systems Review},
    link = {http://www.cs.umass.edu/~arun/papers/oasis.pdf},
    month = {January},
    pages = {41-48},
    title = {Oasis: An Overlay-Aware Network Stack},
    volume = {40(1)},
    year = {2006}
    }


  • Mechanisms and Algorithms for Large-Scale Replication Systems
    Arun Venkataramani,
    PhD Thesis, Computer Sciences, University of Texas at Austin, December 2004
    [+]Show Abstract
    Replication of data and services is a fundamental building block in the design of distributed systems. Though replication has several benefits for large-scale systems distributed across wide-area networks, it also introduces considerable complexity over a centralized unreplicated system. This thesis contributes mechanisms and algorithms that enable the design of simpler and more efficient large-scale replicated systems. On the mechanism front, we present aggressive speculative replication (ASR) as a basic primitive in building large-scale replicated systems. ASR refers to aggressively trading off the costs of hardware resources for making replicas of content and updates in a speculative manner in return for savings in human time and improved service quality. Today, replicated systems are unable to avail of the benefits of ASR as current architectures rely on manually-tuned parameters to manage the complexity of large-scale replicated systems. Consequently, such systems are hard to build and maintain, inefficient in utilizing available resources, and prone to the risk of overload. As a solution, we present an architecture, Mars, that performs ASR in a self-tuning manner, i.e., without the need for manual tuning. To enable realization of Mars' architecture in practical systems, we build TCP Nice, an end-to-end transport protocol for background transfers. We demonstrate the benefits of Mars through a case study of a Web prefetching system, NPS, and show that the Mars approach simplifies the design, efficiently uses available resources to give performance benefits, is robust to the risk of system overload, and is easily deployable without changing existing infrastructure by using only simple end-to-end mechanisms. On the algorithmic front, we make three contributions.
First, we present a speculative replication algorithm, namely, long-term prefetching, to minimize average response times at a large cache constrained by bandwidth and validate its effectiveness through simulations using Web traces. Next, we develop a speculative replication algorithm for minimizing response times in a set of hierarchically-organized distributed cooperating caches constrained by bandwidth, and show that it is constant-competitive. Finally, we study the theoretically intriguing problem of minimizing average response times in a set of hierarchically-organized distributed cooperating caches constrained by storage space and show a nonconstant lower bound on the competitive ratio of any online algorithm that is allowed at most a constant-factor space advantage.
    [+]Show Bib
    @phdthesis{phd-thesis,
    author = {Arun Venkataramani},
    school = {Computer Sciences, University of Texas at Austin},
    link = {http://www.cs.umass.edu/~arun/papers/arun_phdthesis.pdf},
    month = {December},
    title = {Mechanisms and Algorithms for Large-Scale Replication Systems},
    year = {2004}
    }


  • Online Hierarchical Cooperative Caching
    Xiaozhou Li, C. Greg Plaxton, Mitul Tiwari, Arun Venkataramani,
    ACM SPAA (Award paper invited for fast-track journal publication), June 2004
    [+]Show Abstract
    We address a hierarchical generalization of the well-known disk paging problem. In the hierarchical cooperative caching problem, a set of n machines residing in an ultrametric space cooperate with one another to satisfy a sequence of read requests to a collection of (read-only) files. A seminal result in the area of competitive analysis states that LRU (the widely-used deterministic online paging algorithm based on the “least recently used” eviction policy) is constant-competitive if it is given a constant-factor blowup in capacity over the offline algorithm. Does such a constant-competitive deterministic algorithm (with a constant-factor blowup in the machine capacities) exist for the hierarchical cooperative caching problem? The main contribution of the present paper is to answer this question in the negative. More specifically, we establish an Ω(log log n) lower bound on the competitive ratio of any online hierarchical cooperative caching algorithm with capacity blowup O((log n)^{1−ε}), where ε denotes an arbitrarily small positive constant.
    [+]Show Bib
    @inproceedings{ohcc-spaa,
    author = {Xiaozhou Li and C. Greg Plaxton and Mitul Tiwari and Arun Venkataramani},
    booktitle = {ACM SPAA (Award paper invited for fast-track journal publication)},
    link = {http://www.cs.umass.edu/~arun/papers/ohcc-spaa.pdf},
    location = {Barcelona, Spain},
    month = {June},
    pages = {74--83},
    title = {Online Hierarchical Cooperative Caching},
    year = {2004}
    }
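
    The LRU baseline this abstract compares against can be made concrete. Below is an illustrative Python sketch of least-recently-used eviction (not code from the paper); the hit/miss bookkeeping is the cost measure used in competitive analysis of paging:

    ```python
    from collections import OrderedDict

    class LRUCache:
        """Least-recently-used eviction: the deterministic online paging
        policy that is constant-competitive given a constant-factor
        capacity blowup over the offline optimum."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()  # keys ordered oldest -> newest

        def access(self, key):
            """Return True on a hit, False on a miss (the cost measure)."""
            if key in self.store:
                self.store.move_to_end(key)  # mark as most recently used
                return True
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True
            return False
    ```

    The paper's negative result shows that no such constant-competitive deterministic policy carries over to the hierarchical cooperative setting, even with a near-logarithmic capacity blowup.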


  • Separating Agreement from Execution for Byzantine Fault Tolerant Services
    Jian Yin, Jean-Philippe Martin, Arun Venkataramani, Lorenzo Alvisi, Mike Dahlin,
    ACM SOSP, October 2003
    [+]Show Abstract
    We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.
    [+]Show Bib
    @inproceedings{separating-sosp,
    author = {Jian Yin and Jean-Philippe Martin and Arun Venkataramani and Lorenzo Alvisi and Mike Dahlin},
    booktitle = {ACM SOSP},
    link = {http://www.cs.umass.edu/~arun/papers/separation-sosp.pdf},
    location = {Bolton Landing, NY, USA},
    month = {October},
    pages = {253--267},
    title = {Separating Agreement from Execution for Byzantine Fault Tolerant Services},
    year = {2003}
    }
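
    The replication-cost claim in this abstract reduces to simple arithmetic over the standard BFT bounds: a fused agreement/execution cluster needs 3f+1 replicas to tolerate f Byzantine faults, while a separated execution cluster needs only 2f+1. A Python sketch of that accounting (illustrative only, not the paper's implementation):

    ```python
    def combined_replicas(f):
        """Classic BFT state machine replication, with agreement and
        execution fused: tolerating f Byzantine faults needs 3f + 1
        replicas, i.e. faults in at most a third of the cluster."""
        return 3 * f + 1

    def separated_replicas(f):
        """With agreement separated from execution, only the agreement
        cluster needs 3f + 1 nodes; execution needs just 2f + 1
        replicas, so faults in up to half (f of 2f + 1) are tolerated."""
        return {"agreement": 3 * f + 1, "execution": 2 * f + 1}
    ```

    For f = 1, execution shrinks from 4 replicas to 3; since execution replicas run the (expensive) application state machine, this is where the cost savings come from.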


  • A Non-Interfering Deployable Web Prefetching System
    Ravi Kokku, Praveen Yalagandula, Arun Venkataramani, Mike Dahlin,
    USENIX Symposium on Internet Technologies and Systems (USITS), March 2003
    [+]Show Abstract
    We present Prefetch-Nice, a novel non-intrusive web prefetching system that (1) systematically avoids interference between prefetch and demand requests at the server as well as in the network by utilizing only spare resources, and (2) is deployable without any modifications to browsers, the HTTP protocol, or the network. Prefetch-Nice's self-tuning architecture eliminates the need for the traditional "threshold" magic numbers typically used to limit interference, thereby allowing applications to improve benefits and reduce the risk of aggressive prefetching. For example, we observe that on a real web server trace, our system reduces the average demand request response times by 25% in comparison to a prefetching scheme that uses the same prediction techniques and traditional techniques for limiting interference.
    [+]Show Bib
    @inproceedings{nps-usits,
    author = {Ravi Kokku and Praveen Yalagandula and Arun Venkataramani and Mike Dahlin},
    booktitle = {USENIX Symposium on Internet Technologies and Systems (USITS)},
    link = {http://www.cs.umass.edu/~arun/papers/nps-usits.pdf},
    month = {March},
    title = {A Non-Interfering Deployable Web Prefetching System},
    year = {2003}
    }


  • Towards a Practical Approach to Confidential Byzantine Fault-Tolerance
    Jian Yin, Jean-Philippe Martin, Arun Venkataramani, Lorenzo Alvisi, Mike Dahlin,
    Lecture Notes in Computer Science (LNCS), Volume:2584 Pages: 51-56, 2003,
    First appeared in International Workshop on Future Directions in Distributed Computing (FuDiCo'02), Bertinoro, Italy
    [+]Show Bib
    @article{bftc-lncs,
    author = {Jian Yin and Jean-Philippe Martin and Arun Venkataramani and Lorenzo Alvisi and Mike Dahlin},
    journal = {Lecture Notes in Computer Science (LNCS)},
    link = {http://www.cs.umass.edu/~arun/papers/bftc-lncs.pdf},
    pages = {51--56},
    title = {Towards a Practical Approach to Confidential Byzantine Fault-Tolerance},
    volume = {2584},
    year = {2003}
    }


  • TCP Nice: A Mechanism for Background Transfers
    Arun Venkataramani, Ravi Kokku, Mike Dahlin,
    USENIX OSDI, December 2002
    [+]Show Abstract
    Many distributed applications can make use of large background transfers--transfers of data that humans are not waiting for--to improve availability, reliability, latency or consistency. However, given the rapid fluctuations of available network bandwidth and changing resource costs due to technology trends, hand tuning the aggressiveness of background transfers risks (1) complicating applications, (2) being too aggressive and interfering with other applications, and (3) being too timid and not gaining the benefits of background transfers. Our goal is for the operating system to manage network resources in order to provide a simple abstraction of near zero-cost background transfers. Our system, TCP Nice, can provably bound the interference inflicted by background flows on foreground flows in a restricted network model. And our microbenchmarks and case study applications suggest that in practice it interferes little with foreground flows, reaps a large fraction of spare network bandwidth, and simplifies application construction and deployment. For example, in our prefetching case study application, aggressive prefetching improves demand performance by a factor of three when Nice manages resources; but the same prefetching hurts demand performance by a factor of six under standard network congestion control.
    [+]Show Bib
    @inproceedings{tcp-nice-osdi,
    author = {Arun Venkataramani and Ravi Kokku and Mike Dahlin},
    booktitle = {USENIX OSDI},
    link = {http://www.cs.umass.edu/~arun/papers/tcp-nice-osdi.pdf},
    month = {December},
    title = {TCP Nice: A Mechanism for Background Transfers},
    year = {2002}
    }
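
    TCP Nice's core mechanism, as described in the paper, reacts to rising round-trip delays as a signal of incipient congestion and backs off before loss occurs, so background flows yield to foreground traffic. A simplified Python sketch of that idea (the threshold and fraction parameters here are illustrative assumptions, not the paper's constants):

    ```python
    def nice_backoff(cwnd, rtts, min_rtt, max_rtt,
                     threshold=0.2, fraction=0.5):
        """Simplified sketch of TCP Nice-style congestion avoidance:
        if more than `fraction` of recently sampled RTTs exceed
        min_rtt + threshold * (max_rtt - min_rtt), treat it as early
        congestion and multiplicatively reduce the window, so that
        background flows yield to foreground traffic before loss."""
        cutoff = min_rtt + threshold * (max_rtt - min_rtt)
        congested = sum(1 for r in rtts if r > cutoff)
        if congested > fraction * len(rtts):
            return max(1, cwnd // 2)  # back off early, before packet loss
        return cwnd + 1  # otherwise grow additively, as in normal TCP
    ```

    Reacting to delay rather than loss is what lets the interference bound hold: background flows shrink their windows while foreground flows are still below the loss point.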


  • Operating System Support for Massive Replication
    Arun Venkataramani, Ravi Kokku, Mike Dahlin,
    Proceedings of the ACM SIGOPS European Workshop, Saint-Emilion, France, September 2002
    [+]Show Abstract
    This paper first argues that operating systems should provide support for massive replication of data and services. In particular, we argue that (1) technology trends favor "wasting" surprisingly large amounts of bandwidth and storage in order to improve availability or latency, and (2) system support for massive replication is needed to realize these benefits because hand-tuning by engineers will not work.
    [+]Show Bib
    @inproceedings{massive-replication,
    author = {Arun Venkataramani and Ravi Kokku and Mike Dahlin},
    booktitle = {Proceedings of the ACM SIGOPS European Workshop, Saint-Emilion, France},
    link = {http://www.cs.umass.edu/~arun/papers/massive-replication.pdf},
    month = {September},
    pages = {227--230},
    title = {Operating System Support for Massive Replication},
    year = {2002}
    }


  • Potential Costs and Benefits of Long-Term Prefetching for Content Distribution
    Arun Venkataramani, Praveen Yalagandula, Ravi Kokku, Sadiya Sharif, Mike Dahlin,
    Computer Communications, 25(4), pp. 367-375, July 2002,
    Fast-tracked from WCW'01.
    [+]Show Abstract
    In this paper, we examine the bandwidth-constrained placement problem, focusing on trade-offs appropriate for wide area network (WAN) environments. The goal is to place copies of objects at a collection of distributed caches to minimize expected access times from distributed clients to those objects subject to a maximum bandwidth constraint at each cache. We develop a simple algorithm to generate a bandwidth-constrained placement by hierarchically refining an initial per-cache greedy placement. We prove that this hierarchical algorithm generates a placement whose expected access time is within a constant factor of the optimal placement's expected access time. We then proceed to extend this algorithm to compute close to optimal placement strategies for dynamic environments.
    [+]Show Bib
    @article{ltp-comcom,
    author = {Arun Venkataramani and Praveen Yalagandula and Ravi Kokku and Sadiya Sharif and Mike Dahlin},
    journal = {Computer Communications},
    link = {http://www.cs.umass.edu/~arun/papers/ltp-wcw.pdf},
    month = {July},
    number = {4},
    pages = {367--375},
    title = {Potential Costs and Benefits of Long-Term Prefetching for Content Distribution},
    volume = {25},
    year = {2002}
    }


  • Byzantine fault-tolerant confidentiality
    Jian Yin, Jean-Philippe Martin, Arun Venkataramani, Lorenzo Alvisi, Mike Dahlin,
    Proceedings of the International Workshop on Future Directions in Distributed Computing (FuDiCo'02), Pages: 12-15, 2002
    [+]Show Abstract
    This paper discusses how to use redundancy to simultaneously improve availability, integrity, and confidentiality. We call this problem Confidential BFT (CBFT) and propose the privacy firewall architecture to solve it. Service replicas connect to the privacy firewall and can send messages ...

  • Bandwidth Constrained Placement in a WAN
    Arun Venkataramani, Mike Dahlin, Phoebe Weidmann,
    ACM PODC, August 2001
    [+]Show Abstract
    In this paper, we examine the bandwidth-constrained placement problem, focusing on trade-offs appropriate for wide area network (WAN) environments. The goal is to place copies of objects at a collection of distributed caches to minimize expected access times from distributed clients to those objects subject to a maximum bandwidth constraint at each cache. We develop a simple algorithm to generate a bandwidth-constrained placement by hierarchically refining an initial per-cache greedy placement. We prove that this ...
    [+]Show Bib
    @inproceedings{bcp-podc,
    author = {Arun Venkataramani and Mike Dahlin and Phoebe Weidmann},
    booktitle = {ACM PODC},
    link = {http://www.cs.umass.edu/~arun/papers/bcp-podc.pdf},
    month = {August},
    title = {Bandwidth Constrained Placement in a WAN},
    year = {2001}
    }
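
    The initial per-cache greedy step that the abstract's hierarchical refinement starts from can be sketched as a benefit-per-bandwidth ranking. This Python fragment is a hypothetical illustration under simplified assumptions (the `objects` map and its (savings, cost) values are invented inputs), not the paper's full algorithm:

    ```python
    def greedy_placement(objects, bandwidth_budget):
        """Illustrative per-cache greedy step (not the paper's full
        hierarchical algorithm): pick objects in decreasing order of
        expected access-time savings per unit of bandwidth consumed,
        until the cache's bandwidth budget is exhausted.

        `objects` maps name -> (savings, bandwidth_cost); both fields
        are hypothetical inputs for illustration."""
        placed, used = [], 0.0
        ranked = sorted(objects.items(),
                        key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
        for name, (savings, cost) in ranked:
            if used + cost <= bandwidth_budget:
                placed.append(name)
                used += cost
        return placed
    ```

    The paper's contribution is what happens after this step: hierarchically refining such per-cache placements yields an overall placement whose expected access time is within a constant factor of optimal.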


  • Potential Costs and Benefits of Long-Term Prefetching for Content Distribution
    Arun Venkataramani, Praveen Yalagandula, Ravi Kokku, Sadiya Sharif, Mike Dahlin,
    Web Caching and Content Distribution Workshop (WCW), July 2001,
    Award paper invited for fast-track journal publication.
    [+]Show Abstract
    In this paper, we examine the bandwidth-constrained placement problem, focusing on trade-offs appropriate for wide area network (WAN) environments. The goal is to place copies of objects at a collection of distributed caches to minimize expected access times from distributed clients to those objects subject to a maximum bandwidth constraint at each cache. We develop a simple algorithm to generate a bandwidth-constrained placement by hierarchically refining an initial per-cache greedy placement. We prove that this hierarchical algorithm generates a placement whose expected access time is within a constant factor of the optimal placement's expected access time. We then proceed to extend this algorithm to compute close to optimal placement strategies for dynamic environments.
    [+]Show Bib
    @inproceedings{ltp-wcw,
    author = {Arun Venkataramani and Praveen Yalagandula and Ravi Kokku and Sadiya Sharif and Mike Dahlin},
    booktitle = {Web Caching and Content Distribution Workshop (WCW)},
    link = {http://www.cs.umass.edu/~arun/papers/ltp-wcw.pdf},
    month = {July},
    title = {Potential Costs and Benefits of Long-Term Prefetching for Content Distribution},
    year = {2001}
    }


  • Model Checking a Parameterized Directory-based Cache Coherence Protocol
    E. Allen Emerson, Arun Venkataramani,
    Unpublished manuscript, Computer Sciences, UT Austin, 2001
    [+]Show Abstract
    We present the mechanical verification of a parameterized distributed directory-based cache coherence protocol that had defied naive attempts at mechanical verification for some time. The verification was accomplished by applying a combination of known techniques, including symmetry reduction, repetitive constructors, and state abstraction. The clustering of the reachable state space was done manually, guided by counterexamples, and the original program was rewritten so as to contain only 14 local client states as opposed to the original 288. These abstractions reduced the global state-space graph to only 1277 states and resulted in the successful mechanical verification of the protocol.
    [+]Show Bib
    @misc{ccp,
    author = {E. Allen Emerson and Arun Venkataramani},
    note = {Unpublished manuscript},
    institution = {Computer Sciences, UT Austin},
    link = {http://www.cs.umass.edu/~arun/papers/ccp.pdf},
    title = {Model Checking a Parameterized Directory-based Cache Coherence Protocol},
    year = {2001}
    }


  • Conformance Testing of Protocols Represented as Communicating Finite State Machines
    Arun Venkataramani,
    Undergraduate Thesis, Computer Science and Engineering, IIT Bombay, May 1999
    [+]Show Bib
    @mastersthesis{ug-thesis,
    author = {Arun Venkataramani},
    link = {http://www.cs.umass.edu/~arun/papers/ug-thesis.pdf},
    month = {May},
    school = {Computer Science and Engineering, IIT Bombay},
    title = {Conformance Testing of Protocols Represented as Communicating Finite State Machines},
    type = {Undergraduate Thesis},
    year = {1999}
    }