MLFL Wiki - University of Massachusetts

Bridging Probabilistic Inference And Motion Planning With Markov Chain Monte Carlo

Abstract: Probabilistic models have been powerful tools for modeling problems in artificial intelligence. They encode uncertain or unknown components as stochastic substructure that works alongside deterministic substructure in inference problems. Markov Chain Monte Carlo (MCMC) methods follow simple designs and efficiently generate random samples that approximate a target distribution. Because they can answer questions about the stochastic substructure of an inference problem, this inspires us to introduce MCMC to motion planning problems.
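
For readers unfamiliar with MCMC, the sketch below is a minimal random-walk Metropolis-Hastings sampler in Python. It is not code from the talk; the Gaussian target, step size, and function names are illustrative assumptions, chosen only to show how a simple chain produces samples that approximate a target distribution.

    import math
    import random

    def metropolis_hastings(log_target, x0, step=0.5, n_samples=1000):
        """Random-walk Metropolis-Hastings: the empirical distribution of the
        returned samples approximates the density proportional to exp(log_target)."""
        samples = []
        x = list(x0)
        lp = log_target(x)
        for _ in range(n_samples):
            # Propose a small Gaussian perturbation of the current state.
            y = [xi + random.gauss(0.0, step) for xi in x]
            lq = log_target(y)
            # Accept with probability min(1, target(y) / target(x)).
            if random.random() < math.exp(min(0.0, lq - lp)):
                x, lp = y, lq
            samples.append(list(x))
        return samples

    # Example: sample a 2-D standard Gaussian (a stand-in target distribution).
    samples = metropolis_hastings(
        log_target=lambda v: -0.5 * sum(vi * vi for vi in v),
        x0=[0.0, 0.0],
    )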

In this talk, I will describe informed sampling methods that use MCMC to solve optimal kinodynamic motion-planning problems. Our proposed MCMC approach samples efficiently from an informed set, especially when the dimension is high and the volume of the informed set gradually shrinks. I will also describe how we apply MCMC methods to sample directly over "unknown" Pareto fronts in trajectory space. The sampled trajectories gradually converge to Pareto-optimal solutions of a multi-objective motion-planning problem.
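
As an illustration only (not the speaker's algorithm), the sketch below shows why MCMC is attractive for informed sets: it assumes the simple geometric, Euclidean-cost case, where the informed set is the ellipsoid of states whose start-through-state-to-goal cost is at most the current best solution cost. The function names, step size, and costs are hypothetical.

    import math
    import random

    def in_informed_set(x, start, goal, best_cost):
        """Illustrative informed set: states x whose path cost start -> x -> goal
        (straight-line costs) does not exceed the best solution cost found so far."""
        return math.dist(start, x) + math.dist(x, goal) <= best_cost

    def mcmc_informed_samples(start, goal, best_cost, x0, step=0.2, n_samples=500):
        """Random-walk MCMC targeting the uniform distribution over the informed set:
        proposals that leave the set are rejected, so the chain stays inside it."""
        x, samples = list(x0), []
        for _ in range(n_samples):
            y = [xi + random.gauss(0.0, step) for xi in x]
            if in_informed_set(y, start, goal, best_cost):
                x = y
            samples.append(list(x))
        return samples

    # Example in 2-D: the informed set is an ellipse with foci at start and goal.
    samples = mcmc_informed_samples(
        start=(0.0, 0.0), goal=(1.0, 0.0), best_cost=1.2, x0=[0.5, 0.0])

As the best solution cost tightens and the informed set's volume shrinks, uniform rejection sampling wastes most of its draws, whereas the random walk stays inside the set by construction; this is the regime the abstract highlights.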

Bio: Daqing Yi is a postdoctoral researcher working with Prof. Siddhartha Srinivasa in the Personal Robotics Lab at the University of Washington. He received his Ph.D. in Computer Science under the supervision of Prof. Michael Goodrich at Brigham Young University. He works at the intersection of robotics and interactive machine intelligence, focusing on algorithms that bootstrap a robot's understanding from interaction with humans and that efficiently generate robust actions when collaborating with humans. He has received a Best Conference Paper Award from the IEEE International Conference on Systems, Man, and Cybernetics.

Page last modified on April 14, 2018, at 01:41 AM