
University of Massachusetts

MLFL Wiki

Can Robots Measure Their Own Error In Mapping Missions?

We answer in the affirmative by presenting a theory of autonomous precision-error estimation that has connections with compressed sensing and Fourier analysis on the symmetry group. Research in low-level computer vision in the 1990s discovered that the geometry of multiple images contains enough information to perform 3-D model reconstructions of a scene independent of any knowledge of initial camera positions, orientations, or even internal camera parameters. The same is possible for the precision errors of the model reconstructions. The talk will present how this seemingly paradoxical estimation can be done by geometrically separating the reconstruction errors into an accuracy component and a precision component. Determining the precision, or resolution, of the maps is possible without any ground truth, even when the accuracy is completely unknown. The precision errors can be recovered if they are sparse enough -- a common situation given the high quality of imaging sensors and stereo-matching algorithms. The talk will also illustrate the techniques of non-commutative harmonic analysis by analyzing the precision errors using induced representations of the symmetry group.
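The sparsity argument above is the standard compressed-sensing premise: a vector with few nonzero entries can be recovered exactly from a small number of linear measurements. The sketch below illustrates that principle in isolation with a generic greedy solver (Orthogonal Matching Pursuit); it is not the talk's estimator, and the dimensions, the random Gaussian measurement matrix, and the `omp` helper are all assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse "error" vector: n entries, only k of them nonzero -- the
# compressed-sensing analogue of sparse precision errors.
n, m, k = 50, 25, 3          # signal length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

# m noiseless linear measurements through a random Gaussian matrix.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit for k-sparse recovery."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares refit on all columns chosen so far.
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("recovered support:", sorted(np.nonzero(x_hat)[0]))
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```

With far fewer measurements than unknowns (25 vs. 50), the 3-sparse vector is recovered exactly; the same underdetermined-but-sparse structure is what makes precision-error recovery plausible without ground truth.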

Page last modified on March 02, 2008, at 11:35 AM