You have found Clemens Rosenbaum's website. I am a PhD candidate at the College of Information and Computer Sciences at the University of Massachusetts Amherst, where I am advised by Sridhar Mahadevan.
My interests span several machine learning disciplines, in particular deep learning and reinforcement learning.

About me and my interests

I am interested in a subfield of artificial intelligence called machine learning, the automated discovery of models to solve particular problems. My research focuses on aspects of commonsense reasoning, in particular interactive reasoning and transfer learning. I mostly employ deep learning and reinforcement learning techniques.

A very short CV:


A more detailed description of what I do:

My main research interest is developing machine learning techniques for commonsense reasoning, mainly using a combination of reinforcement learning and deep learning. While commonsense reasoning subsumes a wide range of problems, I have focused on two. The first is dialog, where an agent has to learn how to interact with other agents in order to perform specific tasks. My work here is probably best characterized by "Learning to Query, Reason, and Answer Questions on Ambiguous Texts". There, we built a dialog simulator that utters statements containing common sources of ambiguity, such as transcription mistakes, requiring its conversation partner to reason about the missing knowledge. In technical terms, this makes the interactions only partially observable. We additionally developed an extension that requires the agent to explain its actions, described in "e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations".
The second problem I have been focusing on is transfer and multi-task learning. In multi-task learning, the goal is to leverage similarities between different tasks to boost the learning system's ability to generalize, thereby improving performance on all tasks involved. If done successfully, this involves transfer, as the algorithm can pick up on the "right kind" of similarities to maximize overall performance. Here, I believe that a solution can be found by using reinforcement learning as a meta-learner for deep architectures. In one particular application, this meant using a reinforcement learner to adaptively select different "function blocks" (which can be layers of neural networks) specific to each task and sample. The paper explaining this idea in detail is "Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning".
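The routing idea above can be sketched in a few lines. The toy code below is not the paper's implementation: the function blocks, the bandit-style update, and all names (Router, FUNCTION_BLOCKS, etc.) are illustrative stand-ins. It only shows the core mechanic: a per-task router, trained from a scalar reward, learns which function block to apply to each input.

```python
import random

# Illustrative stand-ins for "function blocks" (in the paper these can be
# neural network layers); here they are simple scalar functions.
FUNCTION_BLOCKS = [
    lambda x: x * 2.0,   # block 0
    lambda x: x + 1.0,   # block 1
    lambda x: -x,        # block 2
]

class Router:
    """Keeps one preference value per (task, block) pair and is trained
    with a simple bandit-style update, standing in for the RL meta-learner."""

    def __init__(self, num_tasks, num_blocks, lr=0.1):
        self.q = [[0.0] * num_blocks for _ in range(num_tasks)]
        self.lr = lr

    def select(self, task, epsilon=0.1):
        # Epsilon-greedy: mostly pick the block with the highest preference.
        if random.random() < epsilon:
            return random.randrange(len(self.q[task]))
        row = self.q[task]
        return row.index(max(row))

    def update(self, task, block, reward):
        # Move the preference toward the observed reward.
        self.q[task][block] += self.lr * (reward - self.q[task][block])

def train(router, task, target_fn, steps=500):
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        block = router.select(task)
        y = FUNCTION_BLOCKS[block](x)
        reward = -abs(y - target_fn(x))  # negative error as reward signal
        router.update(task, block, reward)

random.seed(0)
router = Router(num_tasks=2, num_blocks=len(FUNCTION_BLOCKS))
train(router, task=0, target_fn=lambda x: x * 2.0)  # task 0 matches block 0
train(router, task=1, target_fn=lambda x: -x)       # task 1 matches block 2

best_for_task0 = router.q[0].index(max(router.q[0]))
best_for_task1 = router.q[1].index(max(router.q[1]))
print(best_for_task0, best_for_task1)
```

After training, the router has learned a different block choice per task (block 0 for task 0, block 2 for task 1), which is the essence of task-conditional routing; the actual routing networks make this choice per layer and train the blocks themselves jointly with the router.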

Selected Papers

  • Clemens Rosenbaum, Tim Klinger, and Matthew Riemer "Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning" (ICLR 2018). openreview
  • Marlos C. Machado, Clemens Rosenbaum, Xiaoxiao Guo, Miao Liu, Gerald Tesauro, and Murray Campbell, "Eigenoption Discovery through the Deep Successor Representation" (ICLR 2018). openreview
  • Clemens Rosenbaum, Tian Gao, and Tim Klinger, "e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations" (WHI@ICML 2017). arxiv
  • Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P. Bigus, Murray Campbell, B. Kawass, Kartik Talamadupula, and Gerald Tesauro. "Learning to Query, Reason, and Answer Questions on Ambiguous Texts" (ICLR 2017). openreview
  • Ishan P. Durugkar, Clemens Rosenbaum, Stefan Dernbach, Sridhar Mahadevan. "Deep Reinforcement Learning With Macro-Actions" (arxiv 2016). arxiv
  • Clemens Rosenbaum and Sridhar Mahadevan. "Boosting Gradient Algorithms for Multi-Agent Games" (LICMAS@NIPS 2015).


My preferred type of contact is via email at cgbr@cs.umass.edu.