Grasp Pose Detection In Dense Clutter
We present a grasp detection method that localizes robotic grasp configurations directly from sensor data, without estimating object pose. Our method takes as input a point cloud from an RGB-D camera, such as the Microsoft Kinect, and produces 6-DOF grasp pose estimates. We use an algorithmic framework that first generates a large number of grasp candidates and then uses machine learning to predict whether each candidate is a viable grasp. For the second step, we initially trained a support vector machine on HOG features; we later improved on this by training a convolutional neural network instead. We discuss several ways to improve grasp detection performance for the latter method. To evaluate our methods, we conducted a series of robotic experiments in a dense-clutter tabletop scenario. In addition, we will present recent results on applying our method in a mobile manipulation scenario and on detecting grasps on objects of interest.
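The two-stage pipeline described above (candidate generation followed by learned scoring) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the candidate sampler and the density-based score below are hypothetical stand-ins for the geometric search and the SVM/CNN classifier used in the actual method.

```python
import numpy as np

def generate_candidates(cloud, num_samples=100, rng=None):
    """Sample hypothetical 6-DOF grasp candidates near points in the cloud.

    Each candidate is (position, orientation), with orientation as a
    placeholder Euler-angle triple. The real method instead searches hand
    poses constrained by the local surface geometry of the point cloud.
    """
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(cloud), size=num_samples)
    positions = cloud[idx]
    orientations = rng.uniform(-np.pi, np.pi, size=(num_samples, 3))
    return list(zip(positions, orientations))

def score_candidate(candidate, cloud):
    """Stand-in for the learned classifier: score by local point density.

    The actual method encodes the points inside the gripper closing region
    and scores that representation with an SVM (HOG features) or a CNN.
    """
    position, _ = candidate
    dists = np.linalg.norm(cloud - position, axis=1)
    return float(np.sum(dists < 0.02))  # count points within 2 cm

def detect_grasps(cloud, top_k=5, rng=0):
    """Generate candidates, score each one, and return the top_k grasps."""
    candidates = generate_candidates(cloud, rng=rng)
    ranked = sorted(candidates,
                    key=lambda c: score_candidate(c, cloud),
                    reverse=True)
    return ranked[:top_k]

# Toy point cloud: points on a small cylinder standing in for an object.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
cloud = np.stack([0.03 * np.cos(theta),
                  0.03 * np.sin(theta),
                  np.linspace(0.0, 0.1, 200)], axis=1)
grasps = detect_grasps(cloud)
print(len(grasps))  # 5
```

The key design point the sketch preserves is the separation of concerns: candidate generation only needs to be high-recall, because the learned scorer in the second stage is responsible for precision.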
Andrea ten Pas is currently a Ph.D. student in the Helping Hands Lab in the College of Computer and Information Science at Northeastern University, advised by Professor Robert Platt. His main interests are in perception and robotics, in particular perception for robotic grasping and manipulation. Andrea is also the author of several ROS packages for grasping objects in cluttered environments.