September 26, 2014 |
3:00-4:00pm
Speaker: Mike Swift
University of Wisconsin-Madison
Host: Emery Berger
Title: New Interfaces to Storage-Class Memory
Storage-class memory (SCM) technologies such as phase-change memory,
spin-transfer torque MRAM, and memristors promise the performance and
flexibility of DRAM with the persistence of flash and disk. In this
talk, I will discuss two interfaces to persistent data stored in SCM.
First, I will talk about Mnemosyne, a system that exposes
storage-class memory directly to applications in the form of
persistent regions. With only minor hardware changes, Mnemosyne
supports consistent in-place updates to persistent data structures and
achieves performance up to 10x faster than current storage systems.
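Mnemosyne's real interface pairs persistent regions with durable memory transactions and modest hardware support; as a very rough present-day analogue (the file name, byte layout, and flush-based durability below are made-up details, and no crash-consistency guarantee is provided), the sketch below simply maps a file as a region and updates a value in place rather than pushing it through a storage stack.
```python
import mmap, os, struct

# Map a file as a toy "persistent region": updates go directly to the mapped
# bytes, and flushing makes them durable in place, with no serialization step.
REGION_FILE = "counter.region"   # hypothetical backing file for the region
REGION_SIZE = 4096

fd = os.open(REGION_FILE, os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, REGION_SIZE)
region = mmap.mmap(fd, REGION_SIZE)

# Treat the first 8 bytes of the region as a persistent counter.
count = struct.unpack_from("<Q", region, 0)[0]
struct.pack_into("<Q", region, 0, count + 1)   # in-place update of the persistent value

region.flush()   # force the update to stable storage (stands in for SCM persistence)
print("counter is now", count + 1)
```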
Second, I will talk about how to build file systems for storage-class
memory. While standard storage devices rely on the operating system
kernel for protected shared access, SCM can use virtual-memory
hardware to protect access from user-mode programs. This enables
application-specific customization of file system policies and
interfaces. I will describe the design of the Aerie file system for
SCM, which provides flexible, high-performance access to files.
Bio:
Mike Swift is an associate professor at the University of
Wisconsin, Madison. His research focuses on the hardware/operating
system boundary, including device drivers, new processor/memory
technologies, and transactional memory. He grew up in Amherst,
Massachusetts and received a B.A. from Cornell University in
1992. After college, he worked at Microsoft in the Windows group,
where he implemented authentication and access control functionality
in Windows Cairo, Windows NT, and Windows 2000. He received a Ph.D. on
operating system reliability from the University of Washington in
2005.
|
May 12, 2014 |
Speaker: Milind Kulkarni
Purdue University
Host: Emery Berger
Title: Automatically enhancing locality in irregular applications
Over the past several decades of compiler research, there have been great successes in automatically enhancing locality for regular programs, which operate over dense matrices and arrays. Tackling locality in irregular programs, which operate over pointer-based data structures such as trees and graphs, has been much harder, and has mostly been left to ad hoc, application-specific methods.
In this talk, I will describe efforts by my group to automatically improve locality in a particular class of irregular applications, those that traverse trees. The key insight behind our approach is an abstraction of data structure traversals as operations on vectors. This abstraction lets us design transformations, predict their behavior and determine their correctness. I will present two specific transformations we are developing, point blocking and traversal splicing, and show that they can deliver substantial performance improvements when applied to several real-world irregular kernels.
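As a rough sketch of the point-blocking idea (the toy tree, the point representation, and the visit function below are simplified assumptions, not the compiler transformation itself): instead of letting each point finish its traversal before the next point starts, a block of points moves through the tree together, so each node is touched once per block while it is hot in cache.
```python
# Toy binary-search-style tree; each traversal walks from the root toward a leaf.
class Node:
    def __init__(self, split, left=None, right=None):
        self.split, self.left, self.right = split, left, right

def visit(node, point):
    """Hypothetical per-node work performed for one point."""
    point["visits"] += 1

def traverse_one(root, point):
    """Baseline: one point walks the whole tree before the next point starts."""
    node = root
    while node is not None:
        visit(node, point)
        node = node.left if point["x"] < node.split else node.right

def traverse_blocked(root, points):
    """Point blocking: carry a block of points down the tree together, so each
    node is loaded once per block instead of once per point."""
    work = [(root, points)]
    while work:
        node, block = work.pop()
        if node is None or not block:
            continue
        for p in block:
            visit(node, p)
        work.append((node.left, [p for p in block if p["x"] < node.split]))
        work.append((node.right, [p for p in block if p["x"] >= node.split]))

tree = Node(0.5, Node(0.25), Node(0.75))
pts = [{"x": x / 10, "visits": 0} for x in range(10)]
pts2 = [dict(p) for p in pts]
traverse_blocked(tree, pts)
for p in pts2:
    traverse_one(tree, p)
print([p["visits"] for p in pts] == [p["visits"] for p in pts2])   # True: same work, better locality
```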
Bio:
Milind Kulkarni is an assistant professor in the School of Electrical and Computer Engineering at Purdue University. His research focuses on developing languages, compilers and systems that can efficiently and effectively exploit locality and parallelism in complex applications on complex computation platforms. Before joining Purdue, he was a postdoctoral research associate at the University of Texas at Austin from May 2008 to August 2009. He received his Ph.D. in Computer Science from Cornell University in 2008, where he was a Department of Energy High Performance Computer Science (HPCS) Fellow. Prior to that, he received his M.S. in Computer Science from Cornell University in 2005, and BS degrees in Computer Science and Computer Engineering from North Carolina State University in 2002. He received the NSF CAREER award in 2012 and the Department of Energy Early Career Research Award in 2013 for his work on optimizing irregular applications. He is a member of the ACM and the IEEE Computer Society.
|
February 10, 2014 |
Speaker: Claire Le Goues
Carnegie Mellon University
Host: Yuriy Brun
Title: Automatic Program Repair Using Genetic Programming
"Everyday, almost 300 bugs appear...far too many for only the Mozilla
programmers to handle" --Mozilla developer, 2005
Software quality is a pernicious problem. Although 40 years of
software engineering research has provided developers considerable
debugging support, actual bug repair remains a predominantly manual,
and thus expensive and time-consuming, process. I will describe
GenProg, a technique that uses metaheuristic search strategies to
automatically fix software bugs using only a program's source code and
existing test suite. My empirical evidence demonstrates that GenProg
can quickly and cheaply fix a large proportion of real-world bugs in
large open-source C programs. I will also briefly discuss the
atypical search space of the automatic program repair problem, and the
ways it has challenged assumptions about software defects and how to
fix them.
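A minimal sketch of the kind of search loop behind this style of repair (the statement-level mutation operator, fitness weights, and helper names below are illustrative assumptions, not GenProg itself, which edits C abstract syntax trees and uses crossover as well):
```python
import random

def fitness(candidate, neg_tests, pos_tests):
    """Weight the initially-failing (negative) tests more heavily than the
    initially-passing (positive) tests, as test-driven repair typically does."""
    return (10 * sum(t(candidate) for t in neg_tests)
            + sum(t(candidate) for t in pos_tests))

def mutate(candidate):
    """One statement-level edit: delete a statement, or insert a copy of a
    statement taken from elsewhere in the same program."""
    stmts = list(candidate)
    i = random.randrange(len(stmts))
    if random.random() < 0.5 and len(stmts) > 1:
        del stmts[i]
    else:
        stmts.insert(i, random.choice(stmts))
    return stmts

def repair(program, neg_tests, pos_tests, pop_size=40, generations=100):
    """Evolve candidate patches until one passes every test, or give up."""
    best_possible = 10 * len(neg_tests) + len(pos_tests)
    population = [mutate(program) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: fitness(c, neg_tests, pos_tests), reverse=True)
        if fitness(population[0], neg_tests, pos_tests) == best_possible:
            return population[0]          # all tests pass: candidate repair found
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return None
```
Restricting insertions to statements already present in the program mirrors GenProg's observation that the ingredients of a fix often already exist elsewhere in the code.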
Bio:
Claire Le Goues is an assistant professor of computer science at
Carnegie Mellon, where she is primarily affiliated with the Institute
for Software Research. Her research interests lie in the intersection
of software engineering and programming languages, with a particular
focus on software quality, testing, and automated error repair. She
received her PhD and MS degrees from the University of Virginia in
2013 and 2009 respectively, and her BA in Computer Science from
Harvard College in 2006. Her work on automatic program repair, the
subject of this talk, won Gold and Bronze designations at the 2009 and
2012 ACM SIGEVO "Humies" awards for Human-Competitive Results Produced
by Genetic and Evolutionary Computation, an ACM SIGSOFT Distinguished
Paper Award at the 2009 International Conference on Software
Engineering, and a featured article designation in the Jan/Feb 2012
issue of the IEEE Transactions on Software Engineering.
|
November 4, 2013 |
Speaker: Nenad Medvidovic
University of Southern California
Host: Yuriy Brun
Title: Architectural Decay in Software Systems: Symptoms, Causes, and Remedies
Engineers frequently neglect to carefully consider the impact of their
changes to a software system. As a result, the software system's
architectural design eventually deviates from the original designers' intent
and degrades through unplanned introduction of new and/or invalidation of
existing design decisions. Architectural decay increases the cost of making
new modifications and decreases a system's reliability, until engineers are
no longer able to effectively evolve the system. At that point, the system's
actual architecture may have to be recovered from the implementation
artifacts, but this is a time-consuming and error-prone process, and leaves
critical issues unresolved: the problems caused by architectural decay will
likely be obfuscated by the system's many elements and their
interrelationships, thus risking further decay. In this talk I will focus on
pinpointing locations in a software system's architecture that reflect
architectural decay. I will discuss the reasons why that decay occurs.
Specifically, I will present an emerging catalog of commonly occurring
symptoms of decay -- architectural "smells". I will illustrate the
occurrence of smells identified in the process of recovering the
architectures of several real-world systems. Finally, I will provide a
comparative analysis of a number of automated techniques that aim to recover
a system's architectural design from the system's implementation.
Bio:
Nenad Medvidović is a Professor and Associate Chair for Ph.D. Affairs in the
Computer Science Department at the University of Southern California.
Between 2009 and 2013 Medvidović served as Director of the USC Center for
Systems and Software Engineering (CSSE). He was the Program Co-Chair of the
2011 International Conference on Software Engineering (ICSE 2011).
Medvidović received his Ph.D. in 1999 from the Department of Information and
Computer Science at UC Irvine. He is a recipient of the National Science
Foundation CAREER (2000) award, the Okawa Foundation Research Grant (2005),
the IBM Real-Time Innovation Award (2007), and the USC Mellon Mentoring
Award (2010). He is a co-author of the ICSE 1998 paper titled
"Architecture-Based Runtime Software Evolution", which was recognized as
that conference's Most Influential Paper. Medvidović's research interests
are in the area of architecture-based software development. His work focuses
on software architecture modeling and analysis; middleware facilities for
architectural implementation; domain-specific architectures; architectural
styles; and architecture-level support for software development in highly
distributed, mobile, resource constrained, and embedded computing
environments. He is a co-author of a textbook on software architectures.
Medvidović is a member of ACM, ACM SIGSOFT, IEEE, and IEEE Computer Society.
|
October 21, 2013 |
Speaker: Veselin Raychev
ETH-Zurich
Host: Emery Berger
Title: EventRacer: Finding Concurrency Errors in Web Sites
Like shared-memory multi-threaded programs, event-driven programs such as client-side web applications are susceptible to data races that are hard to reproduce and debug. Yet, these races may cause serious damage (e.g. JavaScript crashes, lost e-mails, broken UI).
Building a concurrency detector which can find harmful races in this setting is particularly challenging due to: i) heavy use of ad-hoc synchronization leading to an overwhelming number of false positives, ii) complex interaction between a large number of events which can render current analyzers impractical, and iii) a need to precisely capture the happens-before relation which is assembled from a diverse set of sources.
In this talk, I will present EventRacer, a dynamic race detector that addresses these challenges and finds real bugs in web applications. We focus on the key points that made EventRacer possible.
We first present a scalable algorithm that uses graph connectivity based on chain decomposition to find races in long executions. This algorithm significantly outperforms existing state-of-the-art detectors in both space and time.
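To give a flavor of how chain decomposition speeds up happens-before queries (the event graph, chain assignment, and helper names below are illustrative assumptions, not EventRacer's implementation): decompose the acyclic event graph into chains, record for every node the earliest position it can reach on each chain, and a happens-before query becomes a table lookup.
```python
from collections import defaultdict

def build_reachability(nodes, succs, chain_of, pos_in_chain):
    """nodes must be topologically ordered; succs[v] lists v's direct successors.
    reach[v][c] = earliest position on chain c that is reachable from v."""
    reach = {}
    for v in reversed(nodes):                 # reverse topological order
        r = defaultdict(lambda: float("inf"))
        r[chain_of[v]] = pos_in_chain[v]      # v reaches itself
        for s in succs.get(v, []):
            for c, p in reach[s].items():
                r[c] = min(r[c], p)
        reach[v] = dict(r)
    return reach

def happens_before(reach, chain_of, pos_in_chain, u, w):
    """True iff there is a path from event u to event w."""
    return reach[u].get(chain_of[w], float("inf")) <= pos_in_chain[w]

# Tiny example: two chains of events with a single cross edge a2 -> b2.
nodes = ["a1", "a2", "a3", "b1", "b2"]
succs = {"a1": ["a2"], "a2": ["a3", "b2"], "b1": ["b2"]}
chain_of = {"a1": 0, "a2": 0, "a3": 0, "b1": 1, "b2": 1}
pos = {"a1": 0, "a2": 1, "a3": 2, "b1": 0, "b2": 1}
reach = build_reachability(nodes, succs, chain_of, pos)
print(happens_before(reach, chain_of, pos, "a1", "b2"))   # True: a1 -> a2 -> b2
print(happens_before(reach, chain_of, pos, "b1", "a3"))   # False: unordered, a potential race
```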
We then define and show how to find uncovered races -- a special class of races that are not affected by user-written ad-hoc synchronization. Uncovered races are key to reducing the number of false positives reported by the tool.
We finally present an evaluation of our approach on a set of widely used websites, demonstrate that harmful races are widespread, and show how they could negatively affect user experience.
The full source, binary distributions and an online interface are available here:
http://www.eventracer.org/
Bio:
Veselin Raychev is a PhD student at ETH Zurich, where he has been working on concurrency analysis and program synthesis. His interests include program analysis, machine learning, and algorithms. Before joining ETH, he was a Senior Software Engineer at Google developing algorithms for transit directions in Google Maps. He obtained his M.Sc. from Sofia University in 2009.
|
September 30, 2013 |
Speaker: Arjun Guha
UMass Amherst
Faculty Research Seminar: 12:30pm
Title: Programming Languages meets Programmable Networks
Software-defined networking (SDN) is an architecture for building networks that has quickly grown in popularity in recent years. SDN promises to make networks cheaper to build and maintain, easy to configure and upgrade, and even promises "network programmability" and new kinds of "network applications" that have not been practical before.
In this talk, we introduce SDN and see that it makes network programming possible, but it does not make it easy. SDN APIs, such as OpenFlow, have several low-level quirks that confound programmers, and SDN has fundamental architectural properties that make software-defined networks challenging to program and reason about.
Nevertheless, it is possible to build a mathematical model of SDN and even prove properties of SDN control programs. We present the design and implementation of the first verified SDN controller, based on a detailed operational model of OpenFlow, which is the most popular SDN API. During the course of verification, we found several bugs in existing SDN tools.
Instead of verifying a single control program, we build a verified compiler and runtime system for a high-level "SDN programming language" called NetKAT. NetKAT has an intuitive semantics, which makes programming easy, and a sound and complete equational theory, which makes it possible to verify properties of NetKAT programs. We present an in-progress verification tool, based on the Z3 Theorem Prover, that can statically check certain key properties of simple NetKAT programs.
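A toy denotational sketch of NetKAT-style policy composition (the field names, combinator helpers, and example policy below are made-up illustrations, not the NetKAT implementation, and neither the Kleene star nor the equational theory is shown): a policy maps a packet to a set of output packets, tests filter, modifications rewrite a field, and policies compose in parallel and in sequence.
```python
# A packet is a dict of header fields; a policy maps a packet to a list of
# packets (a list stands in for the set of outputs in NetKAT's semantics).
def test(field, value):
    """Filter: pass the packet through only if the field matches."""
    return lambda pkt: [pkt] if pkt.get(field) == value else []

def modify(field, value):
    """Modification: rewrite one header field."""
    return lambda pkt: [{**pkt, field: value}]

def seq(p, q):
    """Sequential composition: run q on every packet that p produces."""
    return lambda pkt: [out for mid in p(pkt) for out in q(mid)]

def union(p, q):
    """Parallel composition: send a copy of the packet through both policies."""
    return lambda pkt: p(pkt) + q(pkt)

# "At switch 1, forward out port 2; at switch 2, forward out port 1."
policy = union(seq(test("switch", 1), modify("port", 2)),
               seq(test("switch", 2), modify("port", 1)))
print(policy({"switch": 1, "port": 9}))   # [{'switch': 1, 'port': 2}]
```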
Finally, we present an SDN control program, called PANE, which allows users, applications, and devices to re-configure the network for themselves. By giving users the ability to configure the network, they can work with the network, rather than around it, to achieve better performance, security, or predictability. The key challenge is to safely compose competing and conflicting requests by mutually-untrusting principals. To do so, we employ a capability-like authorization mechanism and several ideas from NetKAT. We demonstrate the usefulness and feasibility of PANE by using it to enhance four third-party applications, deployed on an OpenFlow testbed.
|
September 23, 2013 |
Speaker: Niko Matsakis
Mozilla Research
Host: Emery Berger
Title: Guaranteeing Memory Safety in Rust
Rust is a new programming language targeting systems-level applications. Rust offers a similar level of control over performance to C++, but guarantees type soundness, memory safety, and data-race freedom. One of Rust's distinguishing features is that, like C++, it supports stack allocation and does not require the use of a garbage collector.
This talk covers the basics of Rust. We show how Rust's type system guarantees memory safety, and how these same techniques can be generalized to provide data-race freedom.
Bio:
Nicholas Matsakis is a senior researcher at Mozilla Research. He focuses on safe support for parallelism in programming languages. He is currently working on the Rust programming language as well as Parallel JavaScript.
|
April 29, 2013 |
Speaker: Robert Grimm
NYU
Host: Emery Berger
Title: SuperC: Parsing All of C by Taming the Preprocessor
C tools, such as source browsers, bug finders, and automated
refactorings, need to process two languages: C itself and the
preprocessor. The latter improves expressivity through file includes,
macros, and static conditionals. But it operates only on tokens,
making it hard to even parse both languages.
This talk presents a
complete, performant solution to this problem. First, a
configuration-preserving preprocessor resolves includes and macros yet
leaves static conditionals intact, thus preserving a program's
variability. To ensure completeness, we analyze all interactions
between preprocessor features and identify techniques for correctly
handling them. Second, a configuration-preserving parser generates a
well-formed AST with static choice nodes for conditionals. It forks
new subparsers when encountering static conditionals and merges them
again after the conditionals. To ensure performance, we present a
simple algorithm for table-driven Fork-Merge LR parsing and four novel
optimizations. We demonstrate the effectiveness of our approach on
the x86 Linux kernel.
This is joint work with Paul Gazzillo. The corresponding paper
is online at http://cs.nyu.edu/rgrimm/papers/pldi12.pdf.
Bio:
Robert Grimm is an Associate Professor in the Department of Computer
Science at New York University. He graduated with a Ph.D. in Computer
Science from the University of Washington at Seattle in 2002.
Professor Grimm's research explores how to leverage programming
language technologies for making complex systems easier to build,
maintain, and extend. His recent work focuses on multilingual and
streaming systems. Professor Grimm's honors include an NSF CAREER
Award, a Junior Fellowship at NYU's Center for Teaching Excellence,
and the Best Paper Award at the 6th ACM International Conference on
Distributed Event-Based Systems.
|
April 22, 2013 |
Speaker: Ramesh Sitaraman
University of Massachusetts
Title: The Billion Dollar Question in Online Videos: How Video Performance Impacts Viewer Behavior?
Online video is the killer application of the Internet. It is predicted that more than 85% of the consumer traffic on the Internet will be video-related by 2016. Yet, the future economic viability of online video rests squarely on our ability to understand how viewers interact with video content. For instance:
- If a video fails to start up quickly, would the viewer abandon?
- If a video freezes in the middle, would the viewer watch fewer minutes?
- If videos fail to load, is the viewer less likely to return to the same site?
In this talk, we outline scientific answers to these and other such questions, establishing for the first time a causal link between video performance and viewer behavior. One of the largest such studies, our work analyzes the video viewing habits of over 6.7 million viewers who in aggregate watched almost 26 million videos. To go beyond mere correlation and to establish causality, we develop a novel technique based on Quasi-Experimental Designs (QEDs). While QEDs are well known in medicine and the computational social sciences, our work represents their first use in network performance research and is of independent interest.
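A small sketch of the quasi-experimental design idea in this setting (the viewer records, matching attributes, and outcome metric below are made up for illustration): pair viewers who are alike on confounding attributes but differ in the "treatment", say whether the video froze, and then look at the sign of the outcome difference within each matched pair.
```python
from collections import defaultdict

# Hypothetical viewer records: confounders, treatment, and outcome.
viewers = [
    {"geo": "US", "conn": "dsl",   "froze": True,  "minutes": 4.0},
    {"geo": "US", "conn": "dsl",   "froze": False, "minutes": 9.5},
    {"geo": "EU", "conn": "cable", "froze": True,  "minutes": 6.0},
    {"geo": "EU", "conn": "cable", "froze": False, "minutes": 5.5},
    {"geo": "US", "conn": "fiber", "froze": True,  "minutes": 3.0},
    {"geo": "US", "conn": "fiber", "froze": False, "minutes": 8.0},
]

def matched_pairs(records, confounders=("geo", "conn")):
    """Group viewers by the confounders, then pair a treated viewer with an
    untreated one from the same group, so each pair differs (ideally) only in
    the treatment."""
    groups = defaultdict(lambda: {True: [], False: []})
    for r in records:
        groups[tuple(r[c] for c in confounders)][r["froze"]].append(r)
    for g in groups.values():
        yield from zip(g[True], g[False])

# Sign test over pairs: +1 if the frozen viewer watched less, -1 if more.
score = sum((u["minutes"] > t["minutes"]) - (u["minutes"] < t["minutes"])
            for t, u in matched_pairs(viewers))
print("net pairs in which freezing reduced viewing time:", score)   # 1 here
```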
This talk is of general interest and is accessible to a broad audience.
Bio:
Ramesh K. Sitaraman received his B. Tech. in electrical engineering from the Indian Institute of Technology, Madras. He obtained his Ph.D. in computer science from Princeton University. Prof. Sitaraman is currently a faculty member in the Computer Science Department at the University of Massachusetts at Amherst, where he is part of the Theoretical Computer Science group.
On a leave from academia, as a principal architect, Prof. Sitaraman helped build Akamai Technologies and helped pioneer the Internet-scale distributed networks that currently deliver much of the world’s web content, streaming videos, and online applications. He was named an Akamai Fellow.
Prof. Sitaraman's research focuses on foundational issues in the design of large Internet-scale distributed systems, communication networks, cloud computing, and global Internet services. Prof. Sitaraman is a recipient of an NSF CAREER Award and a Lilly Fellowship. He has served on numerous program committees and editorial boards of major conferences and journals.
|
April 8, 2013 |
Speaker: Rick Hudson
Intel Corporation
Host: Emery Berger
Title: River Trail: Adding Data Parallelism to JavaScript
Parallel hardware is today's reality and language extensions that ease exploiting its promised performance flourish. For most mainstream languages, one or more tailored solutions exist that address the specific needs of the language to access parallel hardware. Yet, one widely used language is still stuck in the sequential past: JavaScript, the lingua franca of the web.
Solutions used in other languages do not transfer well to the world of JavaScript due to differences in programming models, the additional requirements of the web like safety and security, and developer expectations. To address this we created River Trail, a new parallel programming API designed specifically for JavaScript. This talk will describe River Trail and its vision of how to leverage current and future parallel hardware from within the browser and JavaScript.
Bio:
Richard L. Hudson is best known for his work in memory management, including the invention of the Train Algorithm, the Sapphire Algorithm, the Mississippi Delta Algorithm, and leveraging transactional memory to enable concurrent garbage collection. He pioneered the use of stack maps, which enable accurate garbage collection in statically typed languages like Java. He worked on transactional memory and was a driving force that led to the articulation of the x86 memory model. For the past 2+ years, Richard has worked on the River Trail team researching the concurrent programming models needed to implement a more visual and immersive web experience.
Richard joined Intel in 1998, where he has worked on programming language runtimes, memory management, concurrency, synchronization, memory models, and programming model issues. He attended Shortridge, holds a B.A. from Hampshire College, and an M.S. from the University of Massachusetts.
|
December 10, 2012 |
Speaker: Kevin Walsh
Holy Cross
Host: Emery Berger
Title: Authorization and Trust: Lessons from the Nexus Project
We have little reason to trust computer systems. We do not know what
software lurks on the other side of a network connection, or even what
software runs on our own machines. We have few means to specify or
reason about why we might trust a piece of software. And we do not
have adequate authorization mechanisms in place to limit the damage
that rogue software can inflict.
In this talk, I will revisit the familiar problem of authorization,
focusing on how formal authorization logic might be leveraged to build
more trustworthy systems. Lessons will be drawn from our experience
using such logic to implement a file system and some applications for
Nexus, an experimental operating system that relies on a trusted
platform module (TPM) hardware co-processor as a secure root of trust.
Bio:
Kevin Walsh is an Assistant Professor in the Department of Mathematics
and Computer Science at the College of the Holy Cross. He received his
Ph.D. in 2012 from Cornell University; prior to his graduate studies
he held a position as a visiting researcher at Duke University. His
research spans authorization logics, trustworthy applications and
operating systems, device driver isolation, peer-to-peer and wireless
sensor networks, and network simulation and emulation.
|
October 1, 2012 |
Speaker: Ben Shneiderman
University of Maryland — College Park
Host: Lee Osterweil
Title: Information Visualization for Medical Informatics
Effective medical care depends on well-designed user interfaces that enable users to benefit from the increasing abundance of information that supports decision making.
Novel strategies in information visualization allow clinicians and medical researchers to
explore in systematic yet flexible ways, so as to derive insights and make discoveries.
This talk begins with commercial success stories such as www.spotfire.com,
www.smartmoney.com/marketmap and www.hivegroup.com and explores their
application to medical informatics. Then we look at research tools for electronic health
records to find specified event sequences (www.cs.umd.edu/hcil/lifelines2) and to view compact summaries of millions of patient histories (www.cs.umd.edu/hcil/lifeflow).
Demonstrations also cover visual interfaces to support clinicians in understanding patient
status, doing medication reconciliation, and tracking medical lab tests (www.cs.umd.edu/hcil/sharp).
Bio:
Ben Shneiderman is a Professor in the Department
of Computer Science and Founding Director (1983-2000) of the Human-Computer
Interaction Laboratory (http://www.cs.umd.edu/hcil/) at the University of Maryland. He
is a Fellow of the AAAS, ACM, and IEEE, and a Member of the National Academy of
Engineering.
Prof. Shneiderman is the co-author with Catherine Plaisant of Designing the User
Interface: Strategies for Effective Human-Computer Interaction (5th ed., 2010). With Stu Card and Jock Mackinlay, he co-authored Readings in
Information Visualization: Using Vision to Think (1999). His book Leonardo’s Laptop
appeared in October 2002 (MIT Press) and won the IEEE book award for Distinguished
Literary Contribution. His latest book, with Derek Hansen and Marc Smith, is Analyzing
Social Media Networks with NodeXL (2010).
|
September 17, 2012 |
Speaker: Alexandra Meliou
UMass Amherst
Title: Reverse Engineering Data Transformations
This talk will provide a general and accessible overview of my recent work on Reverse Data Management. The vision of my research is to make data and its history (provenance) fully explorable, verifiable, and reversible. Today, increasingly more data is derived from other data, emphasizing the need to reverse engineer these derivations. For example, while today scientists who use previously cleaned data are forced to trust the cleaning process, I envision automatically reverse-engineering how data cleaning affects the results. Making such information part of the analysis process, and automating its extraction, can revolutionize not only data cleaning applications, but also financial data mining, data analysis debugging, and performance anomaly cause detection.
I will discuss applications of causality in data analysis and business intelligence tasks, and demonstrate the Tiresias system, which extends database functionality to handle strategy planning queries over large datasets. My talk will highlight future directions and potential synthesis projects.
Bio:
Alexandra Meliou is an Assistant Professor in the Department of Computer Science at the University of Massachusetts, Amherst. She has held this position since September 2012. Prior to that, she was a Post-Doctoral Research Associate at the University of Washington, working with Dan Suciu. Alexandra received her Ph.D. and M.S. degrees from the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley, in 2009 and 2005 respectively. She is a 2008 Siebel Scholar, and her research interests are in the area of data and information management, with a current emphasis on provenance, causality, and reverse data management.
|
February 13, 2012 |
Speaker: Stephen Freund
Williams College
Host: Emery Berger
Title: Cooperative Concurrency for a Multicore World
Multithreaded programs are notoriously prone to unintended
interference between concurrent threads. To address this problem, we
argue that yield annotations in the source code should document all
thread interference, and we present a type system for verifying the
absence of undocumented interference. Well-typed programs behave as
if context switches occur only at yield annotations. Thus, they can
be understood using intuitive sequential reasoning, except where yield
annotations remind the programmer to account for thread interference.
Experimental results show that our type system for yield annotations
is more precise than prior techniques based on method-level atomicity,
reducing the number of interference points by an order of magnitude.
The type system is also more precise than prior methods targeting race
freedom. In addition, yield annotations highlight all known
concurrency defects in our benchmark suite.
This is joint work with Cormac Flanagan, Jaeheon Yi, and Caitlin Sadowski
at UC Santa Cruz.
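The type system itself is beyond a short sketch, but the cooperative model it verifies can be illustrated with Python generators, where a toy scheduler switches tasks only at explicit yields (the balance/deposit/run names below are made up): code between yields can be read sequentially, and the yield marks exactly where interference can strike.
```python
from collections import deque

balance = 0   # shared state

def deposit(amount):
    """Between yields this code runs without interference, so it reads
    sequentially; the lone yield documents the one point where another
    task may observe or modify `balance`."""
    global balance
    snapshot = balance            # no interference can happen before the yield
    yield                         # documented interference point
    balance = snapshot + amount   # stale if another task ran at the yield

def run(tasks):
    """Toy cooperative scheduler: switches tasks only at yield points."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)
            queue.append(task)
        except StopIteration:
            pass

run([deposit(10), deposit(5)])
print(balance)   # 5, not 15: the yield sits inside a read-modify-write
```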
Bio:
Stephen Freund is an Associate Professor and Chair of the Computer
Science Department at Williams College. His current research focuses
on light-weight analyses to identify defects in concurrent software,
such as race conditions, atomicity errors, and specification
violations. Prior to joining Williams in 2002, Steve worked at the
Compaq Systems Research Center on various programmer productivity
tools. He received his PhD from Stanford University in 2000.
|
November 22, 2011 |
Speaker: Ben Livshits
Microsoft Research
Host: Emery Berger
Title: Finding Malware on a Web Scale
Over the last several years, JavaScript malware has emerged as one of the most popular ways to deliver drive-by attacks to unsuspecting users through the browser. This talk covers recent Microsoft Research experience with finding malware on the web. It highlights two tools: Nozzle and Zozzle. Nozzle is a runtime malware detector that focuses on finding heap-spraying attacks. Zozzle is a mostly static detector that finds heap sprays and other types of JavaScript malware; it provides a very fast and accurate scanner, processing a megabyte of JavaScript code in about a second. Both are extremely precise: Nozzle's false positive rate is close to one in a billion; Zozzle's is about one in a million JavaScript documents.
Both tools are deployed at Bing, where Nozzle has been finding thousands of malicious web sites daily for months; initial estimates show that Zozzle will let the team go even further in detecting malicious sites. The talk will focus on the interplay between static and runtime analysis and on what it takes to migrate research ideas into real-world products.
Bio:
Ben Livshits is a researcher at Microsoft Research in Redmond, WA and an affiliate professor at the University of Washington. Originally from St. Petersburg, Russia, he received a bachelor's degree in Computer Science and Math from Cornell University in 1999, and his M.S. and Ph.D. in Computer Science from Stanford University in 2002 and 2006, respectively. Dr. Livshits' research interests include application of sophisticated static and dynamic analysis techniques to finding errors in programs.
Ben has published papers at PLDI, Oakland Security, Usenix Security, CCS, SOSP, ICSE, FSE, and many other venues. He is known for his work in software reliability and especially tools to improve software security, with a primary focus on approaches to finding buffer overruns in C programs and a variety of security vulnerabilities (cross-site scripting, SQL injections, etc.) in Web-based applications. He is the author of several dozen academic papers and patents. Lately he has been focusing on how Web 2.0 application and browser reliability, performance, and security can be improved through a combination of static and runtime techniques. Ben generally does not speak of himself in the third person.
|
November 21, 2011 |
Speaker: Gene Cooperman
Northeastern University
Host: Emery Berger
Title: Temporal Debugging via Flexible Checkpointing: Changing the Cost Model
Debugging run-time errors remains one of the most time-consuming, and sometimes frustrating, efforts in developing and maintaining programs. A run-time error is uncovered, and the programmer then begins multiple iterations within a debugger in order to build up a hypothesis about the original program fault that caused the error. Examples of run-time errors include segmentation fault, assertion failure, infinite loop, deadlock, livelock, and missing synchronization locks.
This talk describes a debugging approach based on a reversible debugger, sometimes known as a time-traveling debugger. This is a more natural approach, since it allows a programmer during a single program run to work backwards from run-time error to earlier fault, and still earlier to the original causal fault. A new tool, reverse expression watchpoints, allows one to begin with a program error and an expression that has an incorrect value, and automatically bring the programmer backwards in time to a point at which the expression first took on an incorrect value. This tool is part of a long-range project in which a series of such tools is planned --- each tool customized for a different class of run-time errors.
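A sketch of the search idea behind reverse expression watchpoints (the step counter, the expression_ok stand-in, and the monotonicity assumption below are illustrative, not DMTCP's interface): once checkpointing and deterministic replay let the debugger re-run to any chosen point, finding where a watched expression first went wrong becomes a binary search over execution time.
```python
def first_bad_step(num_steps, expression_ok):
    """Return the earliest step at which the watched expression is wrong,
    assuming it is right at step 0, wrong at num_steps, and stays wrong once
    it breaks.  expression_ok(t) stands in for: restore the nearest
    checkpoint, deterministically replay to step t, evaluate the expression."""
    lo, hi = 0, num_steps          # expression ok at lo, wrong at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if expression_ok(mid):
            lo = mid
        else:
            hi = mid
    return hi

# Toy stand-in: pretend the expression silently breaks at step 1,234,567.
print(first_bad_step(10_000_000, lambda t: t < 1_234_567))   # 1234567
```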
The long-term goals described here are motivated by an analogy between syntax errors and run-time errors:
- Currently, syntax errors are easily diagnosed by compilers that bring the programmer directly to the line number, within a textual program, that led to the bad syntax.
- In the future, run-time errors will be easily diagnosed by a new class of reversible debugger tools that bring the programmer directly to the point in time, within a familiar debugging environment, that led to the later run-time error.
The reversible debugger is itself based on a fast, transparent checkpointing package for Linux: DMTCP (Distributed MultiThreaded CheckPointing). DMTCP can checkpoint such varied programs as Matlab, OpenMPI, MySQL, Python, Perl, GNU screen, Vim, Emacs, and most user-developed programs, regardless of the implementation language. No kernel modification or other root privilege is needed. Of particular interest for this talk is the ability of a customized version of DMTCP to checkpoint an entire gdb session. The reversible debugger also supports weak determinism for purposes of debugging multi-threaded programs. The current implementation has proven robust enough to run such large, real-world programs as MySQL and Firefox.
Bio:
Gene Cooperman received his Ph.D. from Brown University in 1978. He spent two years as a post-doc, followed by six years at GTE Laboratories. He has been a professor at Northeastern University since 1986, and a full professor since 1992. His interests lie in high performance computation and symbolic algebra. The first interest is currently focused on DMTCP, a robust transparent checkpointing package that does not require any modifications to the application or kernel/run-time library. A combination of the two interests has led to his joint work on the Roomy language extension to translate traditional RAM-intensive computations into scalable computations based on parallel disks. A variation for remote RAM and supercomputing is being developed for the Madness software at Oak Ridge National Laboratory. He also has a decade-long relationship with CERN, where he supports semi-automatic thread parallelization of task-oriented software, such as Geant4. He leads the High Performance Computing Laboratory at Northeastern University, where he currently advises four PhD students. He has over 80 refereed publications.
|
November 7, 2011 |
Speaker: Sriram Rao
Yahoo! Research
Host: Prashant Shenoy (sponsored by Yahoo!)
Title: I-files: Handling Intermediate Data In Parallel Dataflow Graphs
Over the past few years, parallel dataflow graph frameworks (such as MapReduce, Hadoop, and Dryad) have enabled data-intensive computing on clusters built from commodity hardware. A key component in a parallel dataflow graph computation is the intermediate data that flows between various computation stages. This data is generated during the computation and, in general, has to be moved across machines in the cluster, involving network I/O as well as disk I/O. At large volumes (tens to hundreds of terabytes), unless careful attention is paid to the disk overheads involved in dealing with intermediate data, cluster throughput will degrade.
In this talk, we describe a new approach to handling intermediate data at scale. We find that managing large volumes of intermediate data requires novel batching mechanisms to reduce disk subsystem overheads. Our approach is to build filesystem support specifically for storing intermediate data. We design an atomic record append primitive that enables concurrent writers to append to a file in a lock-free manner: multiple writers append to a block and multiple blocks of a file can be appended to concurrently. We denote files constructed via atomic append I-files. I-file blocks are written to sequentially and are read back mostly sequentially. We have developed an implementation of I-files and used it as the foundation for Sailfish, a MapReduce framework we built by modifying Hadoop. We have also used Sailfish to run unmodified Hadoop MapReduce jobs that process production datasets. Our results show that for transporting intermediate data at scale, Sailfish can outperform Hadoop by at least a factor of 2.
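A thread-level sketch of the record-append idea (the class and method names are made up, and the small Python lock stands in for the atomic fetch-and-add a lock-free implementation would use; real I-files also spread appends across many blocks and machines): a writer atomically reserves a byte range in the current block and then copies its record in with no further coordination.
```python
import threading

class AppendBlock:
    """Toy record-append block: reserve space atomically, then write freely."""
    def __init__(self, size):
        self.buf = bytearray(size)
        self.next = 0
        self._reserve = threading.Lock()   # stand-in for a fetch-and-add

    def append(self, record):
        with self._reserve:                # atomic reservation only
            off = self.next
            if off + len(record) > len(self.buf):
                return None                # block full; caller starts a new block
            self.next = off + len(record)
        self.buf[off:off + len(record)] = record   # uncoordinated copy
        return off

block = AppendBlock(1 << 20)
threads = [threading.Thread(target=lambda i=i: block.append(b"record-%d;" % i))
           for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(bytes(block.buf[:block.next]))       # all eight records, in some order
```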
Bio:
Sriram is a member of the Cloud Sciences group at Yahoo! Labs. His interests are in building distributed storage systems that enable high performance compute services for processing massive datasets. At Yahoo!, Sriram leads the Sailfish project whose goal is to enable large-scale analytics on big data. Prior to Yahoo!, Sriram led the design and implementation of KFS (Kosmos Filesystem), an open-source filesystem project. KFS is deployed in production settings where it is used to manage multiple-PB's of storage.
|
October 3, 2011 |
Speaker: Peter Sweeney
IBM TJ Watson
Host: Emery Berger
Title: The State of Experimental Evaluation of Software and Systems in Computer Science
As hardware and software continue to evolve into increasingly complex systems, understanding their behavior and measuring their performance becomes increasingly difficult.
Nevertheless, many areas of computer science use experiments to identify performance bottlenecks and to evaluate innovations. In the last few years, researchers have identified some disturbing flaws in the way that experiments are performed in computer science.
This talk presents two of these flaws.
First, changing a seemingly innocuous aspect of an experimental setup can result in a systems researcher drawing wrong conclusions from an experiment. What appears to be an innocuous aspect in the experimental setup may in fact introduce a significant bias in an evaluation of native (C and C++) applications.
Second, performance analysts profile their programs to find methods that are worth optimizing: the “hot” methods; however, four commonly used Java profilers (xprof, hprof, jprofile, and yourkit) often disagree on the identity of the hot methods. This talk demonstrates that these profilers all violate a fundamental requirement for sampling-based profilers: to be correct, a sampling-based profiler must collect samples randomly.
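As a toy illustration of that requirement (the workload, intervals, and function names below are made up; this is not how the Java profilers in the study work, and the sketch is Unix-only): a sampling profiler that fires at randomized timer intervals attributes samples to whatever code happens to be running, rather than only to code paused at convenient safe points.
```python
import random, signal
from collections import Counter

samples = Counter()

def take_sample(signum, frame):
    samples[frame.f_code.co_name] += 1
    # Re-arm with a randomized delay so sample points are not correlated with
    # any periodic structure in the program (the requirement noted above).
    signal.setitimer(signal.ITIMER_PROF, random.uniform(0.001, 0.004))

def hot():
    total = 0
    for i in range(2000):
        total += i * i
    return total

def cold():
    total = 0
    for i in range(100):
        total += i
    return total

# Unix-only: SIGPROF fires after the requested amount of CPU time has elapsed.
signal.signal(signal.SIGPROF, take_sample)
signal.setitimer(signal.ITIMER_PROF, 0.002)
for _ in range(3000):
    hot()
    cold()
signal.setitimer(signal.ITIMER_PROF, 0)   # disarm before reporting
print(samples.most_common(3))             # hot() should dominate the samples
```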
Unfortunately, the flaws discussed above are not the full extent of the problem. If computer science is to be taken seriously as a scientific discipline, we as a community need to do a better job designing experiments and evaluating their results.
I will discuss some current efforts being made by the community to improve experimental evaluation in computer science.
Bio:
Peter F. Sweeney is a Research Staff Member in the Program Technology Department at the IBM T.J. Watson Research Center in Hawthorne, New York. His current research interests are performance analysis and tuning of computer systems with a focus on automation. In the past, he has focused on object-oriented optimization. Peter received a Master's degree in Computer Science from Columbia University SEAS and he joined IBM Research in 1985. Peter is a senior member of ACM and a co-author of the paper "Adaptive optimization in the Jalapeno JVM", which received the 2010 ACM SIGPLAN most influential OOPSLA 2000 paper award.
|
September 19, 2011 |
Speakers: Christophe Diot and Renata Teixeira
Technicolor / CNRS and UPMC Sorbonne Universités
Host: Jim Kurose
Titles: Challenges in digital services delivery | Performance of Networked Applications
Challenges in digital services delivery: the cloud vs. the crowd (Diot)
The universal answer to home service delivery these days seems to be "the cloud", even though nobody really agrees on what "cloud services" means. In order to bring some transparency to the Cloud, we identify the challenges in digital home service delivery, discuss the strengths and limitations of a pure cloud approach, and finally propose a hybrid solution relying on both data centers and home devices to better serve home users. We discuss the research and technology challenges that have to be solved to deploy this digital service delivery architecture.
Performance of Networked Applications: The Challenges in Capturing the User’s Perception (Teixeira)
There is much interest recently in doing automated performance diagnosis on user laptops or desktops. One interesting aspect of performance diagnosis that has received little attention is the user perspective on performance. To conduct research on both end-host performance diagnosis and user perception of network and application performance, we designed an end-host data collection tool, called HostView. HostView not only collects network, application and machine level data, but also gathers feedback directly from users. User feedback is obtained via two mechanisms, a system-triggered questionnaire and a user-triggered feedback form, that for example asks users to rate the performance of their network and applications. This talk will describe our experience with the first deployment of HostView. Using data from 40 users, we illustrate the diversity of our users, articulate the challenges in this line of research, and report on initial findings in correlating user data to system-level data. This is joint work with Diana Joumblatt, Jaideep Chandrashekar, and Nina Taft.
|
Thursday, September 15, 2011 |
Speaker: Jeff Chase
Duke University
Host: Prashant Shenoy
Title: Trust in the Federation: Authorization for Multi-Domain Clouds
A multi-domain cloud combines virtual infrastructure from multiple providers to create a powerful platform for networked services, computation, and experimental systems research. NSF's GENI initiative (Global Environment for Network Innovation) is a key example of a multi-domain infrastructure-as-a-service (IaaS) system: it generalizes IaaS cloud computing to incorporate diverse virtual infrastructures, noncommercial providers, configurable network connectivity, and software-defined networking.
One lesson we can draw from the GENI experience is that many of the technical challenges for the GENI control framework ultimately reduce to issues of trust and authorization. In this talk, I will outline an emerging architecture based on declarative policy for federation trust structure and authorization for access to cloud resources. The approach uses libabac from ISI, an open-source implementation of an authorization logic called Attribute-Based Access Control (ABAC). I also address the question of how to incorporate software identities as subjects in the authorization framework. How do we know if we can trust applications and services running in the cloud? I discuss preliminary research on Trusted Platform Cloud, which uses attestations by cloud providers to infer trust for autonomous software instances.
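To give the flavor of attribute-based authorization (the principals, role names, and credential encoding below are made up, and this is a simplification of the RT-style credentials libabac supports): roles are defined by credentials, either direct membership or delegation to another principal's role, and role membership is computed as a fixpoint over the credential set.
```python
# Credentials, loosely in the spirit of RT0:
#   ("UMass", "member", "Alice")           : Alice is directly in UMass.member
#   ("GENI", "user", ("UMass", "member"))  : anyone in UMass.member is in GENI.user
credentials = [
    ("UMass", "member", "Alice"),
    ("UMass", "member", "Bob"),
    ("GENI", "user", ("UMass", "member")),
    ("GENI", "admin", "Carol"),
]

def members(issuer, role, creds):
    """Fixpoint computation of the principals holding issuer.role
    (assumes the delegation graph is acyclic, unlike a full implementation)."""
    result, changed = set(), True
    while changed:
        changed = False
        for iss, r, subject in creds:
            if (iss, r) != (issuer, role):
                continue
            new = {subject} if isinstance(subject, str) else members(*subject, creds)
            if not new <= result:
                result |= new
                changed = True
    return result

print(members("GENI", "user", credentials))   # {'Alice', 'Bob'}
```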
Bio:
Jeff Chase is a Professor of Computer Science at Duke University and a Visiting Scientist at the Renaissance Computing Institute (RENCI). He has spent much of the last four years working on the GENI control framework as a Control Framework Working Group chair and as leader of the Open Resource Control Architecture (ORCA) project. He is co-chair of the 2011 ACM Symposium on Cloud Computing (SOCC).
|
April 27, 2011 |
Speaker: Dan Grossman
University of Washington
Host: Emery Berger
Title: Collaborating at the Hardware/Software Interface: A Programming-Languages Professor’s View
This presentation will describe four ongoing projects that are advised by my computer-architecture colleague Luis Ceze and that I am co-advising or contributing to. For each, I will point out what a programming-languages perspective has to offer and why it is useful to have students who can nimbly cross or blur the hardware/software divide. The projects address the key technology trends of ubiquitous parallelism and energy as a first-order concern: deterministic execution of multithreaded programs, code-centric specification of shared memory, language support for approximate low-power computing, and run-time errors for data races.
Bio:
Dan Grossman is an Associate Professor in the Department of Computer Science & Engineering at the University of Washington where he has been a faculty member since 2003. Grossman completed his Ph.D. at Cornell University and his undergraduate studies at Rice University. His research interests lie in the area of programming languages, ranging from theory to design to implementation, with a focus on improving software quality. In recent years he has focused on better techniques for expressing multithreaded programs, particularly using languages with well-defined support for transactional memory. In prior work, he focused on type-safe systems programming using the Cyclone language, which he developed with colleagues.
Grossman has served on over fifteen conference and workshop program committees in addition to co-chairing the 2007 ACM SIGPLAN-SIGSOFT PASTE workshop and the 2009 ACM SIGPLAN TRANSACT workshop. He currently serves on the ACM SIGPLAN Executive Committee and the ACM Education Council. He is the recipient of an NSF Career Award and two "Teacher of the Year" Awards voted on by his department's undergraduates.
In his spare time, Dan can be found playing ice hockey (poorly), bicycling, hiking, or enjoying good food, beer, and theatre. Dan has never had a cavity.
|