Yuriy Brun

    Assistant Professor

College of Information and Computer Sciences

140 Governors Drive

University of Massachusetts

Amherst, MA 01003-9264 USA

Office: 346
Phone: +1-413-577-0233
Fax: +1-413-545-1249

curriculum vitae

Research interests

My research is in software engineering. I am interested in improving our ability to build systems that are smart and self-adapt to their environment.
I co-direct the LASER and PLASMA laboratories.

Current projects

Perfume: Performance-aware model inference

Understanding the performance of a system, and diagnosing performance-related bugs, are daunting tasks. Perfume observes system executions and infers a compact model of the system that accounts for performance phenomena during inference, and includes performance information in the final model. These models help developers more effectively and quickly answer questions about software system performance. Read: Behavioral Resource-Aware Model Inference.
Collaborators: Tony Ohmann, Michael Herzberg, Sebastian Fiss, Armand Halbert, Marc Palyart, and Ivan Beschastnikh.
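As a toy illustration of the flavor of model Perfume produces (the log format, function names, and algorithm here are invented for illustration, not Perfume's actual input or inference procedure), one can annotate inferred transitions with the range of observed elapsed times:

```python
# Sketch: infer a performance-annotated transition model from
# hypothetical timestamped event logs. Each transition records the
# min/max elapsed time and how often it was observed.

def infer_model(traces):
    """traces: list of executions, each a list of (event, timestamp)."""
    model = {}  # (src event, dst event) -> (min elapsed, max elapsed, count)
    for trace in traces:
        for (e1, t1), (e2, t2) in zip(trace, trace[1:]):
            elapsed = t2 - t1
            lo, hi, n = model.get((e1, e2), (elapsed, elapsed, 0))
            model[(e1, e2)] = (min(lo, elapsed), max(hi, elapsed), n + 1)
    return model

traces = [
    [("login", 0), ("query", 5), ("render", 30)],
    [("login", 0), ("query", 4), ("render", 50)],
]
for (src, dst), (lo, hi, n) in sorted(infer_model(traces).items()):
    print(f"{src} -> {dst}: {lo}..{hi} time units over {n} runs")
```

A transition with a wide elapsed-time range (here, query to render) is exactly the kind of performance variation such a model surfaces for a developer.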

Automatic Program Repair

Bugs in software are incredibly costly, but so is software debugging. More bugs are reported daily than developers can handle, and even critical security bugs take, on average, 28 days to repair. Automatic program repair is an exciting approach, but can we trust automatically generated patches? This project evaluates the quality of such patches and compares them to human-written ones.
Collaborators: Ted Smith, Neal Holts, Claire Le Goues, Prem Devanbu, Stephanie Forrest, and Westley Weimer.

CSight: Inferring precise models of concurrent systems

Concurrent, networked, and other distributed systems are especially difficult to debug and understand. CSight observes system executions and infers a model that accurately describes possible system behavior and host interactions. CSight models improve developer understanding and ability to answer questions about distributed systems. Read: Inferring Models of Concurrent Systems from Logs of Their Behavior with CSight.
Collaborators: Ivan Beschastnikh, Jenny Abrahamson, Michael D. Ernst, and Arvind Krishnamurthy.
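One ingredient of reasoning about logs of concurrent executions is recovering which events are causally ordered and which are concurrent. A minimal sketch of that ordering check, using vector timestamps (the log and event names are invented; this is not CSight's algorithm itself):

```python
# Sketch: recover happens-before ordering among logged events from
# vector clocks; events ordered in neither direction are concurrent.

def happens_before(vc_a, vc_b):
    """True if vector clock vc_a causally precedes vc_b."""
    return all(x <= y for x, y in zip(vc_a, vc_b)) and vc_a != vc_b

# Hypothetical log: event name -> vector clock over two hosts.
log = {
    "send(m)":    (1, 0),
    "recv(m)":    (1, 1),
    "local-work": (2, 0),
}

concurrent = [
    (a, b) for a in log for b in log
    if a < b
    and not happens_before(log[a], log[b])
    and not happens_before(log[b], log[a])
]
print(concurrent)  # pairs that are causally concurrent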

Speculative analysis

When developers make decisions, they often don't know the consequences of their potential actions. Wouldn't it be great if they did? Speculative analysis makes that possible by speculating what actions developers might perform, executing them in the background, and computing the consequences. For example, Crystal proactively detects textual, build, and test conflicts between collaborators. And Quick Fix Scout helps developers resolve compilation errors. Read: Proactive Detection of Collaboration Conflicts.
Collaborators: Kıvanç Muşlu, Reid Holmes, Michael D. Ernst, and David Notkin.
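The core loop of speculative analysis can be sketched as: copy the current state, run each candidate action in the background, and report the consequences before the developer commits to anything. The actions and "state" below are invented stand-ins:

```python
# Sketch: speculatively evaluate candidate developer actions on copies
# of the current state, in background threads, without mutating it.

from concurrent.futures import ThreadPoolExecutor

def speculate(state, actions):
    """Return {action name: resulting state}; the input state is untouched."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, dict(state))
                   for name, fn in actions.items()}
        return {name: f.result() for name, f in futures.items()}

state = {"conflicts": 1, "tests_passing": False}
actions = {
    "merge-main": lambda s: {**s, "conflicts": 0},
    "run-tests":  lambda s: {**s, "tests_passing": True},
}
outcomes = speculate(state, actions)
```

The design point is that speculation must be side-effect free from the developer's perspective, which is why each action runs on its own copy.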

InvariMint: Specifying model inference algorithms declaratively

Model inference algorithms can be hard to understand and compare. To help, InvariMint specifies such algorithms using a single, common, declarative language. InvariMint makes it easy to understand, compare, and extend such algorithms, and to implement them efficiently. Read: Unifying FSM-Inference Algorithms through Declarative Specification.
Collaborators: Ivan Beschastnikh, Jenny Abrahamson, Michael D. Ernst, and Arvind Krishnamurthy.
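A greatly simplified sketch of the declarative flavor (not InvariMint's actual language): an inference step is written as a property type, instantiated over all event pairs, and checked against the traces; the algorithm is the composition of such properties.

```python
# Sketch: a "property type" (here, "a always precedes b") instantiated
# over every event pair and checked declaratively against all traces.

from itertools import permutations

def always_precedes(a, b, trace):
    """Property type: every occurrence of b has an earlier a."""
    seen_a = False
    for e in trace:
        if e == a:
            seen_a = True
        elif e == b and not seen_a:
            return False
    return True

def mine(traces, prop):
    events = {e for t in traces for e in t}
    return {(a, b) for a, b in permutations(events, 2)
            if all(prop(a, b, t) for t in traces)}

traces = [["open", "read", "close"], ["open", "close"]]
invs = mine(traces, always_precedes)
```

Because each property is declarative, two algorithms specified this way can be compared property-by-property rather than by reading their procedural code.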

Synoptic: Summarizing system logs with refinement

To help developers understand their systems, Synoptic observes system executions and infers a compact, finite-state-machine model that describes the system's behavior. Examining these models has led developers to identify bugs and anomalies quickly, empirically verify the correctness of their systems, and understand behavior. Read: Leveraging Existing Instrumentation to Automatically Infer Invariant-Constrained Models.
Collaborators: Ivan Beschastnikh, Jenny Abrahamson, Kevin Thai, Tim Vega, Michael D. Ernst, Arvind Krishnamurthy, and Thomas E. Anderson.
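A rough sketch of Synoptic's starting point (the refinement step is omitted, and the log is invented): collapse every occurrence of an event type into one node, producing a maximally compact initial graph that is then refined until it satisfies temporal invariants mined from the log.

```python
# Sketch: the coarse initial model -- one node per event type, with
# edge counts -- that refinement-based inference starts from.

from collections import defaultdict

def initial_model(traces):
    edges = defaultdict(int)
    for t in traces:
        for a, b in zip(["INITIAL"] + t, t + ["TERMINAL"]):
            edges[(a, b)] += 1
    return dict(edges)

traces = [["connect", "send", "ack"],
          ["connect", "send", "send", "ack"]]
model = initial_model(traces)
```

This coarse model over-generalizes (it admits behaviors never observed); refinement splits nodes until mined invariants, such as "connect always precedes ack", hold in the model.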

Reliability through smart redundancy

One of the most common ways of achieving system reliability is through redundancy. But how can we ensure we are using the resources in a smart way? Can we guarantee that we are squeezing the most reliability possible out of our available resources? A new technique called smart redundancy says we can! Read: Smart redundancy for distributed computation.
Collaborators: George Edwards, Jae young Bang, and Nenad Medvidović.
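One idea in this space, sometimes called progressive redundancy, can be sketched as follows: rather than always running a fixed number of replicas and voting, request results one at a time and stop as soon as some answer has enough agreeing votes, so extra resources are spent only when replicas actually disagree. The `run_replica` stand-in below is hypothetical:

```python
# Sketch: progressive redundancy -- dispatch replicas one at a time
# and stop as soon as an answer reaches the required vote count.

from collections import Counter

def progressive(run_replica, needed_votes, max_replicas):
    votes = Counter()
    for i in range(max_replicas):
        votes[run_replica(i)] += 1
        answer, count = votes.most_common(1)[0]
        if count >= needed_votes:
            return answer, i + 1  # answer plus replicas actually used
    raise RuntimeError("no consensus within the replica budget")

# Replica 1 is faulty; the others agree.
results = {0: 42, 1: -1, 2: 42, 3: 42}
answer, used = progressive(results.get, needed_votes=2, max_replicas=4)
```

Here the faulty replica forces one extra execution, but a failure-free run would have stopped after two, which is where the resource savings come from.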

Privacy preservation in distributed computation via sTile

My email, my taxes, my research, and my medical records are all in the cloud. How do I distribute computation onto untrusted machines while making sure those machines cannot read my private data? sTile makes this possible by breaking the computation into tiny pieces and distributing those pieces on a network in a way that makes it nearly impossible for an adversary controlling less than half of the network to reconstruct the data. Read: Keeping Data Private while Computing in the Cloud.
Collaborator: Nenad Medvidović.
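sTile itself builds on tile-style self-assembly; purely as a loose analogy, additive secret sharing is another way to split data into pieces such that no proper subset of the pieces reveals anything about the original value:

```python
# Sketch (analogy only, not sTile's mechanism): additive secret
# sharing splits a value into n random-looking shares; all n are
# needed to recover it.

import secrets

MOD = 2**32

def split(value, n):
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def combine(shares):
    return sum(shares) % MOD

shares = split(1234, n=5)
```

Any n-1 of the shares are uniformly random on their own, which is the same qualitative guarantee sTile aims for: an adversary holding only some of the pieces learns nothing.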

Past projects

Bug cause analysis

When regression tests break, can information about the latest changes help find the cause of the failure? The cause might be a bug in recently added code, or it could instead be in old code that wasn't exercised in a fault-revealing way until now. Considering the minimal test-failing sets of recent changes, and the maximal test-passing sets of those changes, can reveal surprising facts about the failure's cause. Read: Understanding Regression Failures through Test-Passing and Test-Failing Code Changes.
Collaborators: Roykrong Sukkerd, Ivan Beschastnikh, Jochen Wuttke, and Sai Zhang.
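A brute-force toy sketch of that set-based reasoning (the change names and the `test_passes` predicate are invented; a real tool would be far more selective than enumerating all subsets):

```python
# Sketch: enumerate subsets of recent changes, test each, and keep
# the minimal test-failing and maximal test-passing subsets.

from itertools import combinations

def analyze(changes, test_passes):
    failing, passing = [], []
    for r in range(len(changes) + 1):
        for subset in combinations(changes, r):
            s = set(subset)
            (passing if test_passes(s) else failing).append(s)
    min_fail = [s for s in failing if not any(f < s for f in failing)]
    max_pass = [s for s in passing if not any(s < p for p in passing)]
    return min_fail, max_pass

# Suppose the test fails exactly when changes A and C are both applied.
fails_when = lambda s: not {"A", "C"} <= s
min_fail, max_pass = analyze(["A", "B", "C"], fails_when)
```

Here the minimal failing set {A, C} is the surprise: neither change is faulty alone, so the failure's cause is their interaction rather than any single recent edit.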

DNA Self-Assembly

How do simple objects self-assemble to form complex structures? Answering that question can lead to understanding interactions of objects as simple as atoms and as complex as software systems. Studying mathematical models of molecular interactions, and gaining experimental control over nanoscale DNA structures, is a first step toward understanding this space. Read: (theory) Solving NP-Complete Problems in the Tile Assembly Model or (experimental) Self-Assembly of DNA Double-Double Crossover Complexes into High-Density, Doubly Connected, Planar Structures.
Collaborators: Dustin Reishus, Manoj Gopalkrishnan, Bilal Shaw, Nickolas Chelyapov, and Leonard Adleman.
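A minimal sketch of the abstract Tile Assembly Model that the theoretical side of this work studies: square tiles carry labeled, weighted glues on their sides, and a tile attaches to a growing seed assembly whenever its matching glue strength meets a temperature threshold. The two-tile set below, which just grows a row, is an invented toy example:

```python
# Sketch: abstract Tile Assembly Model attachment rule at temperature 2.

TEMP = 2
NONE = ("", 0)  # the empty glue never bonds
# A tile maps each side (N/E/S/W) to a (glue label, strength) pair.
SEED = {"N": NONE, "E": ("a", 2), "S": NONE, "W": NONE}
ROW  = {"N": NONE, "E": ("a", 2), "S": NONE, "W": ("a", 2)}

def can_attach(assembly, pos, tile):
    x, y = pos
    strength = 0
    for side, (dx, dy), opp in (("N", (0, 1), "S"), ("E", (1, 0), "W"),
                                ("S", (0, -1), "N"), ("W", (-1, 0), "E")):
        nbr = assembly.get((x + dx, y + dy))
        if nbr and tile[side][0] and nbr[opp][0] == tile[side][0]:
            strength += min(nbr[opp][1], tile[side][1])
    return strength >= TEMP

assembly = {(0, 0): SEED}
for x in range(1, 4):  # grow eastward while the glues allow it
    if can_attach(assembly, (x, 0), ROW):
        assembly[(x, 0)] = ROW
```

Richer tile sets exploit the same attachment rule to perform computation as they assemble, which is what makes NP-complete problems expressible in the model.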

The fault-invariant classifier

Can the wealth of knowledge and examples of bugs make it easier to discover unknown, latent bugs in new software? For example, if I have access to the bugs and bug fixes in Windows 7, can I find bugs in Windows 8 before it ships? Machine learning makes this possible for certain classes of often-repeated bugs. Read: Finding Latent Code Errors via Machine Learning over Program Executions.
Collaborator: Michael D. Ernst.
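A hand-rolled sketch of the idea (the real work uses dynamically detected program invariants and a trained classifier; the properties and scoring below are invented): represent each run by the properties that held during it, learn which properties correlate with known-buggy runs, and flag those properties in unseen code.

```python
# Sketch: score run-time properties by how strongly they correlate
# with known-buggy executions.

def fault_scores(runs):
    """runs: list of (set of properties that held, is_buggy)."""
    props = set().union(*(p for p, _ in runs))
    scores = {}
    for p in props:
        buggy = sum(1 for ps, bad in runs if bad and p in ps)
        clean = sum(1 for ps, bad in runs if not bad and p in ps)
        scores[p] = buggy - clean  # positive: seen mostly in buggy runs
    return scores

training = [
    ({"x>0", "lock_held"}, True),
    ({"x>0"}, False),
    ({"lock_held", "y==0"}, True),
]
scores = fault_scores(training)
suspicious = {p for p, s in scores.items() if s > 0}
```

Properties like "x>0" that appear equally in buggy and clean runs score zero and are ignored; only fault-correlated properties are carried forward to screen new programs.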

Other projects