I am an Associate Professor and the director of the Safe, Confident, and Aligned Learning + Robotics Lab (SCALAR) in the College of Information and Computer Sciences at the University of Massachusetts Amherst. The goal of my research is to enable robots and other learning agents to be deployed in the real world with minimal expert intervention. Toward this goal, we develop efficient learning algorithms that enforce safety constraints, provide performance guarantees, and align human and agent objectives. Thus, our work draws from both machine learning and robotics, including topics such as imitation learning, reinforcement learning, manipulation, and human-robot interaction.


Specifically, I am interested in addressing the following questions:

1. How can human demonstrations and interactions be used to bootstrap the learning process?
Writing code is an extraordinarily labor-intensive way to provide agents with human knowledge and usually requires highly trained specialists. Imitation learning is an alternative paradigm for gaining human insight through faster, more natural means such as task demonstrations and interactive corrections. However, such time-series data is often difficult to interpret, requiring the ability to segment activities and behaviors, understand context, and generalize from a small number of examples. How can demonstrations best be interpreted to leverage human insight into complex tasks? How can we ensure that inferred objectives are well aligned with human desires? What kinds of demonstrations are most effective? How can robots take advantage of multiple types of cues, such as natural language, gestures, and gaze?
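
As a concrete (and deliberately simplified) illustration of this paradigm, the sketch below implements behavioral cloning, the simplest form of imitation learning, which reduces learning from demonstrations to supervised regression on recorded (state, action) pairs. The demonstrations and the linear policy class here are synthetic placeholders, not code from our lab.

```python
# Minimal behavioral-cloning sketch: reduce imitation to supervised
# regression on demonstrated (state, action) pairs. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": 200 states in R^4 with expert actions in R^2,
# produced by an unknown linear expert plus noise.
states = rng.normal(size=(200, 4))
expert_W = rng.normal(size=(4, 2))
actions = states @ expert_W + 0.05 * rng.normal(size=(200, 2))

# Ridge-regression fit of a linear policy:
# W = argmin ||S W - A||^2 + lam * ||W||^2
lam = 1e-3
W = np.linalg.solve(states.T @ states + lam * np.eye(4), states.T @ actions)

def policy(state):
    """Predict the expert's action for a previously unseen state."""
    return state @ W

print("predicted action:", policy(rng.normal(size=4)))
```

In practice, richer policy classes and interactive corrections (e.g., DAgger-style querying) are needed to cope with compounding errors, which is part of what makes the questions above difficult.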

2. How can robots autonomously improve their understanding of the world through embodied interaction?
Human demonstrations and interactions can provide a good baseline of knowledge, but do not necessarily cater to a robot's specific internal representations, uncertainties, and capabilities. Ideally, robots should directly reason about these factors and autonomously collect data to improve modeling and control of their environment. How can techniques like active learning or interactive perception be used to experiment intelligently? How can reinforcement learning algorithms best utilize robot experiences? How can we utilize large data sets that already exist, such as the recorded experiences of other robots, or language and video data on the web?
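
As a simplified illustration of uncertainty-driven experimentation, the sketch below fits a bootstrapped model ensemble and repeatedly queries the candidate state where the ensemble disagrees most. The dynamics function and candidate pool are hypothetical stand-ins for real embodied interaction.

```python
# Minimal active-learning sketch: query the state where a bootstrapped
# model ensemble disagrees most, observe the outcome, and refit.
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(x):
    # Environment response, unknown to the agent (hypothetical).
    return np.sin(3 * x) + 0.05 * rng.normal()

# A few passively collected samples to start from.
X = list(rng.uniform(-1, 1, size=8))
Y = [true_dynamics(x) for x in X]

def fit_ensemble(X, Y, k=10, deg=3):
    """Bootstrap an ensemble of polynomial models to expose model uncertainty."""
    models = []
    for _ in range(k):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        models.append(np.polyfit(np.array(X)[idx], np.array(Y)[idx], deg))
    return models

candidates = np.linspace(-1, 1, 200)
for _ in range(10):
    models = fit_ensemble(X, Y)
    preds = np.array([np.polyval(m, candidates) for m in models])
    query = candidates[preds.std(axis=0).argmax()]  # maximum disagreement
    X.append(query)                                 # "experiment" there
    Y.append(true_dynamics(query))

print(f"actively collected {len(X)} samples")
```

Ensemble disagreement is only one proxy for epistemic uncertainty; part of the research question is choosing proxies and query strategies that respect a real robot's embodiment and the cost of experimentation.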

3. How can agents learn from heterogeneous, noisy interactions and still provide strong probabilistic guarantees of correctness and safety?
Learning from demonstration and interaction has seen much practical success, but it often cannot provide strong performance guarantees. To be deployed in many real-world situations, such learners must be able to provide strong probabilistic guarantees of safe and correct performance, especially when working in close proximity to humans. How can lifelong learning algorithms guarantee that a safety-critical task, such as disposing of a hazardous material, will be performed correctly? Similarly, how can tasks like cleaning up a dinner table be continually optimized while guaranteeing, with high probability, that no unsafe situations (such as collisions or spills) will occur? How can safety-aware algorithms be made sample-efficient enough to work on real robotics problems?
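
As a simplified illustration of the style of guarantee we care about, the sketch below applies a Hoeffding concentration bound to (assumed bounded) off-policy return estimates, in the spirit of high-confidence policy improvement, and deploys a candidate policy only if its high-confidence lower bound beats the current baseline. The return samples are synthetic placeholders.

```python
# Minimal sketch of high-confidence policy improvement: deploy a candidate
# policy only if a (1 - delta) lower bound on its performance exceeds the
# current baseline. Returns are synthetic and assumed to lie in [0, 1].
import numpy as np

rng = np.random.default_rng(2)

def hoeffding_lower_bound(samples, delta, value_range):
    """(1 - delta)-confidence lower bound on the mean of bounded i.i.d. samples."""
    n = len(samples)
    return samples.mean() - value_range * np.sqrt(np.log(1 / delta) / (2 * n))

# Off-policy return estimates for a candidate policy (hypothetical output
# of an importance-sampling evaluator run on logged trajectories).
candidate_returns = rng.uniform(0.5, 1.0, size=500)
baseline_performance = 0.6   # measured performance of the current policy
delta = 0.05                 # allowed probability of an unsafe deployment

lb = hoeffding_lower_bound(candidate_returns, delta, value_range=1.0)
if lb > baseline_performance:
    print(f"deploy: lower bound {lb:.3f} exceeds baseline {baseline_performance}")
else:
    print(f"keep baseline: lower bound {lb:.3f} is not provably better")
```

Concentration bounds like this are safe but loose; making them tight enough to be sample-efficient on real robot data is exactly the tension raised above.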