James E. Kostas
I am a fourth-year PhD student working with Dr. Philip Thomas in the Autonomous Learning Lab
(ALL). My research interests lie at the intersection of reinforcement learning
(RL), stochastic neural networks, robotics, and deep learning.
My current research focuses on "sim-to-real" transfer learning for RL in robotics applications. I am also studying the practical advantages of stochastic networks, optionally combined with traditional (deterministic) deep networks, for RL.
- James Kostas, Chris Nota, Philip Thomas. Asynchronous Coagent Networks. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020).
- Philip Thomas, Scott Jordan, Yash Chandak, Chris Nota, James Kostas. Classical Policy Gradient: Preserving Bellman's Principle of Optimality. Technical Report, 2019.
- Yash Chandak, Georgios Theocharous, James Kostas, Scott Jordan, Philip Thomas. Learning Action Representations for Reinforcement Learning. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019).
- Yash Chandak, Georgios Theocharous, James Kostas, Philip Thomas. Reinforcement Learning with a Dynamic Action Set. In the Continual Learning Workshop at the Thirty-Second Conference on Neural Information Processing Systems (NeurIPS 2018).