Publications by Area

Transparently Improving Performance and Reliability

 

Randomized Runtime Systems for Reliability and Security

 

ASPLOS 2008: Archipelago: Trading Address Space for Reliability and Security

DieHard: Efficient Probabilistic Memory Safety (submitted for journal publication; UMass CS TR 07-17; supersedes the PLDI 06 paper)

PLDI 2007: Exterminator: Automatically Correcting Memory Errors with High Probability
with Gene Novark and Ben Zorn.
A system that automatically corrects heap-based memory errors without programmer intervention. Exterminator exploits randomization and replication to pinpoint errors with high precision. Using this information, Exterminator derives runtime patches that fix these errors in current and subsequent executions.

PLDI 2006: DieHard: Probabilistic Memory Safety for Unsafe Languages
with Ben Zorn.
DieHard uses randomization and replication to transparently make C and C++ programs tolerate a wide range of errors, including buffer overflows and dangling pointers. Instead of crashing or running amok, DieHard lets programs continue to run correctly in the face of memory errors with high probability. Using DieHard also makes programs highly resistant to heap-based hacker attacks.
You can download DieHard.

 

Cooperative Memory Management (OS+GC)

 

OSDI 2006: CRAMM: Virtual Memory Support for Garbage-Collected Applications
with Ting Yang, Scott F. Kaplan, and J. Eliot B. Moss.
A virtual memory manager that, combined with a collector-neutral heap sizing algorithm, ensures that garbage-collected applications run as fast as possible while avoiding paging.

PLDI 2005: Garbage Collection without Paging
with Matthew Hertz and Yi Feng.
Introduces bookmarking collection, a GC algorithm that works with the virtual memory manager to eliminate paging. Just before memory is paged out, the collector "bookmarks" the targets of pointers from the pages. Using these bookmarks, BC can perform full garbage collections without loading the pages back from disk. By performing in-memory garbage collections, BC can speed up Java programs by orders of magnitude (up to 41X). Download the bookmarking collector and associated Linux patches.

ISMM 2004: Automatic Heap Sizing: Taking Real Memory into Account
with Ting Yang, Matthew Hertz, Scott Kaplan, and Eliot Moss.
A GC-independent approach that cooperates with an enhanced virtual memory manager to dynamically pick the best heap size while a program is running. Talk (PowerPoint). This work is subsumed by our OSDI 2006 paper, above.


Transparency for Contributory Applications

 

ACM Transactions on Storage: Contributing Storage Using the Transparent File System
with Jim Cipar and Mark Corner, to appear.
A file system that lets background applications use all of your available disk space, without impacting your use of your computer.
Supersedes the FAST 07 paper below.

FAST 2007: TFS: A Transparent File System for Contributory Storage
with James Cipar and Mark Corner.
A file system that lets background applications use all of your available disk space, without impacting your use of your computer. Best Paper Award.

USENIX 2006: Transparent Contribution of Memory
with James Cipar and Mark Corner.
Introduces transparent memory management (TMM), which lets you run background jobs that use your disk and virtual memory without impacting your use of the machine, even after you leave your machine unattended for an extended period of time. You can download TMM.

 

Efficient Memory Management

 

ASPLOS-IX: Hoard: A Scalable Memory Allocator for Multithreaded Applications
with Kathryn McKinley, Robert Blumofe, and Paul Wilson.
Identifies problems of heap contention, space blowup, and allocator-induced false sharing in previous allocators; introduces Hoard, a fast memory allocator that solves these problems.
Note: Hoard's design has undergone a number of changes since the publication of this article.
Hoard home page.

MSP 2005: A Locality-Improving Dynamic Memory Allocator
with Yi Feng.
Presents Vam, a memory allocator that improves cache-level and virtual memory locality. Vam is distributed with Heap Layers.

OOPSLA 2004: MC2: High-Performance Garbage Collection for Memory-Constrained Environments
with Naren Sachindran and Eliot Moss.
MC2 is an incremental, space-efficient garbage collector that has high throughput and low pause times.

Simplifying Correct & Efficient Programming

 

Programming Languages

SenSys 2007: Eon: A Language and Runtime System for Perpetual Systems
with Jacob Sorber, Alex Kostadinov, Matt Brennan, Matt Garber, and Mark Corner.
An energy-aware programming language that supports perpetual systems: these are systems that are intended to run forever and must harvest their energy from the environment.

Flux: Composing Efficient and Scalable Servers (submitted for journal publication; supersedes the USENIX 06 paper)

USENIX 2006: Flux: A Language for Programming High-Performance Servers
with Brendan Burns, Kevin Grimaldi, Alex Kostadinov, and Mark Corner.
Flux is a concise programming language for writing servers that scale and are deadlock-free. A Flux programmer uses off-the-shelf, sequential C and C++ code, and describes their composition; the Flux compiler then generates a deadlock-free, high-concurrency server. Flux also makes it easy to analyze and predict server performance, because the Flux compiler can also generate discrete-event simulators that reflect actual server performance.
You can download Flux.

IJHPCA (Int'l Journal of High-Performance Computing Applications), Winter 2000: Compositional Development of Performance Models in POEMS
with J.C. Browne and A. Dube.

 

Software Infrastructure

PLDI 2001: Composing High-Performance Memory Allocators
with Ben Zorn and Kathryn McKinley.
Introduces Heap Layers, a flexible infrastructure for building memory allocators that leverages C++ template mixins to achieve high performance.
Heap Layers source.

 

Quantitative Memory Management Studies

 

OOPSLA 2005: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management
with Matthew Hertz.
This paper attempts to answer an age-old question: is garbage collection faster/slower/the same speed as malloc/free? We introduce oracular memory management, an approach that lets us measure unaltered Java programs as if they used malloc and free. The result: a good GC can match the performance of a good allocator, but it takes 5X more space. If physical memory is tight, however, conventional garbage collectors suffer an order-of-magnitude performance penalty.

OOPSLA 2002: Reconsidering Custom Memory Allocation
with Ben Zorn and Kathryn McKinley.
Finds that a good general-purpose allocator is better than all custom allocators except regions, but these have serious problems. Introduces reaps (regions + heaps), which combine the flexibility and space efficiency of heaps with the performance of regions.