RGBD-6D-SLAM

Description: The Kinect is used to generate a colored 3D model of an object or a complete room.

Submitted By: Felix Endres, Juergen Hess, Nikolas Engelhard, Juergen Sturm, Daniel Kuhner, Philipp Ruchti, Wolfram Burgard

Keywords: RGBD-SLAM, 3D-SURF, Feature Matching, RANSAC, Graph SLAM, Model Generation, Real-time

This page describes the software package that we submitted for the ROS 3D challenge. The current RGBD-SLAM package is located here.

Summary

We developed a novel method to quickly acquire colored 3D models of objects and indoor scenes with a hand-held Kinect camera. We extract SURF features from the camera image and localize them in 3D space using the depth data. We match these features between pairs of acquired images and use RANSAC to robustly estimate the 3D transformation between them. To achieve real-time processing, we match the current image only against a subset of the previous images, sampled with decreasing frequency the further they lie in the past. Subsequently, we construct a graph whose nodes correspond to camera views and whose edges correspond to the estimated 3D transformations. The graph is then optimized to reduce the accumulated pose errors. This is especially useful for handling loop closures, i.e., when the user has moved the camera around an object or through the room and returns to a previously visited view. A greatly enhanced version of this software is available here.
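
To make the feature localization and RANSAC steps concrete, here is a minimal C++ sketch using Eigen. It is illustrative only: the function names, the Kinect intrinsics, and the RANSAC parameters are assumptions chosen for this example, not the actual rgbdslam code.

// Illustrative sketch (not the actual rgbdslam code): back-project a SURF
// keypoint to 3D using the depth image, then estimate the rigid transform
// between two views with RANSAC over matched 3D points.
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <cstdlib>
#include <utility>
#include <vector>

// Pinhole back-projection: pixel (u, v) with depth z (in meters) to a 3D
// point in the camera frame. fx, fy, cx, cy are placeholder Kinect intrinsics.
Eigen::Vector3d backProject(double u, double v, double z,
                            double fx = 525.0, double fy = 525.0,
                            double cx = 319.5, double cy = 239.5)
{
  return Eigen::Vector3d((u - cx) * z / fx, (v - cy) * z / fy, z);
}

// A matched feature pair as 3D points (current frame, previous frame).
typedef std::pair<Eigen::Vector3d, Eigen::Vector3d> Correspondence;

// RANSAC: repeatedly fit a rigid transform to a minimal sample of three
// correspondences and keep the hypothesis that explains the most matches.
Eigen::Isometry3d ransacRigidTransform(const std::vector<Correspondence>& matches,
                                       int iterations = 200,
                                       double inlierDist = 0.03)  // meters
{
  Eigen::Isometry3d best = Eigen::Isometry3d::Identity();
  size_t bestInliers = 0;
  if (matches.size() < 3) return best;  // need at least a minimal sample
  for (int it = 0; it < iterations; ++it) {
    // Minimal sample: three point pairs determine a rigid transform.
    Eigen::Matrix3d src, dst;
    for (int k = 0; k < 3; ++k) {
      const Correspondence& c = matches[std::rand() % matches.size()];
      src.col(k) = c.first;
      dst.col(k) = c.second;
    }
    // Closed-form least-squares fit (Umeyama), without scale.
    Eigen::Isometry3d hypothesis(Eigen::umeyama(src, dst, false));
    // Count the correspondences consistent with this hypothesis.
    size_t inliers = 0;
    for (size_t i = 0; i < matches.size(); ++i)
      if ((hypothesis * matches[i].first - matches[i].second).norm() < inlierDist)
        ++inliers;
    if (inliers > bestInliers) {
      bestInliers = inliers;
      best = hypothesis;
    }
  }
  return best;  // in practice, refine using all inliers of the best hypothesis
}

The accepted pairwise transforms become the edges of the pose graph; when the camera revisits a known view, the resulting loop-closure edge lets the graph optimizer (HOG-Man in this package) redistribute the accumulated drift.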

Video

In the video we show the generation of a model of our PR2 and a 360° "panorama" of the students' lab. The video is played back in real time (1x). In the rviz visualization, the colored point cloud displays the estimated world model. The colored axes correspond to the estimated camera poses, and the white lines indicate the transformations between these poses.

Data

As a sample for building 3D object models, here is an example of a box and a mug.

The scene has been captured with several snapshots instead of a continuous run, to reduce the size of the resulting file.

The bag file can be used to play back the globally corrected point clouds, e.g., for visualization with rviz: rosbag play <bagfile>. The relevant topics are tf and /rgbdslam/batch_clouds. For convenience, you can use this rviz config file (rosrun rviz rviz -d <vcg-file>) to listen to the topics from the bag file.
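
For example, with hypothetical file names standing in for the actual downloads:

rosbag play object_models.bag
rosrun rviz rviz -d rgbdslam.vcg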

The pcd file is viewable with the pcd viewer (rosrun pcl_visualization pcd_viewer -ax 0.1 <file>). Press 5 to view the cloud in color.

Unfortunately, the data shown in the video was lost in the last-minute rush before the deadline, but we plan to publish more data, so bear with us. In the meantime, we would be interested to see what you can do with it and to hear your feedback. See below for instructions on how to reproduce the entry.

How to Reproduce The Entry

Please note that a greatly improved newer release of this software is available at openslam.org.

Installation instructions for the new release can be found at openslam.org. The following instructions concern the original version, as submitted to the contest.

This package requires ROS Diamondback. Specifically, we require the following ROS stacks and packages:

  • ros-diamondback-ros

  • ros-diamondback-perception-pcl-addons

  • ros-diamondback-vision-opencv
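
On a supported Ubuntu system these can typically be installed via apt, assuming the ROS Debian package repositories are already configured:

sudo apt-get install ros-diamondback-ros ros-diamondback-perception-pcl-addons ros-diamondback-vision-opencv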

1. Install the OpenNI driver as explained on the ROS wiki. Make sure to follow the instructions for Diamondback.

2. Checkout the hogman_minimal package:

svn checkout http://alufr-ros-pkg.googlecode.com/svn/trunk/freiburg_tools/hogman_minimal

3. Checkout the repository:

svn checkout http://alufr-ros-pkg.googlecode.com/svn/branches/freiburg_kinect-experimental/

4. Compile our package:

rosmake --rosdep-install rgbdslam

5. Start the openni driver:

roslaunch openni_camera openni_node.launch

6. Run the launch file:

roslaunch rgbdslam rgbdslam.launch

This launch file starts RGBD-SLAM and rviz. The program is initially in "Pause" mode; it starts capturing data from the camera when you press "CTRL-P" or select "Pause" from the file menu.

rviz visualizes the resulting corrected point cloud and the camera positions. We recommend the following settings:

  1. Set the fixed frame to /openni_camera

  2. Add a PointCloud2 display and subscribe to /rgbdslam/transformed_slowdown_cloud. This displays the current point cloud based on the current estimate of the camera poses. You can set the decay time to -1 to accumulate the world model. To see a colored point cloud, set the color transformer in rviz to RGB8.

  3. Add a Visualization Marker display and subscribe to /camera_pose_graph. These markers display the current estimates of all camera poses.

  4. Add another Visualization Marker display and subscribe to /slam_transform. This topic visualizes the current camera pose.

  5. To visualize the entire world model with all retrospective corrections applied, add another PointCloud2 display, subscribe to /rgbdslam/batch_clouds and set the decay time to -1. You can then use the "Send Final Model" command in the rgbdslam GUI (see below). Depending on the size of the model, this may be very resource intensive, as a lot of points need to be sent to and rendered by rviz. Afterwards, the program will continue as before.

If you want to reproduce the results, keep in mind to

  1. Move slowly (features are sensitive to motion blur and large transformations),

  2. Provide good lighting conditions (better for feature detection and matching), and

  3. Keep some distance to the object (the Kinect provides no depth information at close range).

Our GUI shows the current camera pose as a 4x4 matrix as well as the captured picture. The GUI allows controlling the program as follows:

  • CTRL-P: toggle Pause

  • CTRL-S: send the whole world model based on the optimized graph

    • (Note that this might take a long time depending on the size of the model and the amount of memory available.)
  • CTRL-R: reset and start capturing from the beginning

Code to Checkout

svn checkout http://alufr-ros-pkg.googlecode.com/svn/branches/freiburg_kinect-experimental/

Dependencies

  • OpenCV (for the SURF implementation)
  • PCL (for the point clouds)
  • Qt4 (for the GUI)

Corresponding Author

Felix Endres

Publications

Nikolas Engelhard, Felix Endres, Jürgen Hess, Jürgen Sturm, and Wolfram Burgard, "Real-time 3D visual SLAM with a hand-held RGB-D camera", Proc. of the RGB-D Workshop on 3D Perception in Robotics at the European Robotics Forum, Vasteras, Sweden, 2011.

@InProceedings{ engelhard11euron-workshop,
    TITLE = "Real-time 3D visual SLAM with a hand-held RGB-D camera",
    AUTHOR = "Nikolas Engelhard and Felix Endres and J{\"u}rgen Hess and J{\"u}rgen Sturm and Wolfram Burgard",
    BOOKTITLE = "Proc. of the RGB-D Workshop on 3D Perception in Robotics at the European Robotics Forum",
    ADDRESS = "Vasteras, Sweden",
    YEAR = "2011",
    MONTH = "April",
} 
