From Pixels to Layers: Joint Motion Estimation and Segmentation

The segmentation of scenes into regions of coherent structure and the estimation of image motion are fundamental problems in computer vision, and they are often treated separately. Yet motion provides an important cue for identifying surfaces in a scene, while segmentation can provide the proper support for motion estimation. We propose a probabilistic layered model that solves both problems in a principled way. We model the segmentation with thresholded, spatio-temporally coherent support functions, and the motion with globally coherent but locally flexible priors. Our optimization method extends standard graph cuts algorithms to simultaneously change the states of several MRFs. The proposed method achieves state-of-the-art performance on both the Middlebury optical flow benchmark and the MIT layer segmentation dataset, demonstrating the benefits of a principled combination of segmentation and motion estimation.

Short bio: Dr. Deqing Sun is a postdoctoral fellow at Harvard University, where he works with Dr. Hanspeter Pfister in the visual computing group. He received his Ph.D. in Computer Science from Brown University, where he worked with Dr. Michael J. Black and Dr. Erik Sudderth. He has also been collaborating with Dr. Ce Liu at Microsoft Research New England. His research interests include computer vision and machine learning, with a focus on optical flow estimation, motion segmentation, and video enhancement.
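For readers curious about the "thresholded support functions" mentioned in the abstract, the sketch below is a minimal, hypothetical illustration (not the authors' code) of one common layered-segmentation construction: layers are assumed to be ordered front to back, each layer has a continuous support function over the image, and every pixel is assigned to the front-most layer whose support exceeds a threshold, with a background layer catching the rest.

```python
import numpy as np

def assign_layers(support, threshold=0.0):
    """Illustrative only: support has shape (K, H, W), one continuous
    support function per foreground layer, assumed ordered front to back.
    Each pixel is labeled with the index of the front-most layer whose
    support exceeds the threshold; unclaimed pixels get label K (background)."""
    K, H, W = support.shape
    labels = np.full((H, W), K, dtype=int)   # default: background layer
    assigned = np.zeros((H, W), dtype=bool)  # pixels already claimed
    for k in range(K):
        visible = (support[k] > threshold) & ~assigned
        labels[visible] = k
        assigned |= visible
    return labels

# Toy usage with two foreground layers on a 4x4 image.
rng = np.random.default_rng(0)
g = rng.standard_normal((2, 4, 4))
print(assign_layers(g))
```

In the talk's probabilistic model the support functions are additionally encouraged to be spatio-temporally coherent and are optimized jointly with the motion, which the simple threshold above does not capture.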