Improved AR Markers for Topological Navigation

Description: This entry uses Kinect point cloud data to improve localization of AR Toolkit markers. The markers provide topological localization, while the Kinect point cloud is used for obstacle avoidance, enabling navigation on a robot with limited processing power.

Submitted By: Michael Ferguson

Keywords: navigation, kinect, topological, netbook, marker

Summary

We improve localization of AR markers by incorporating point cloud data in our ar_kinect package. We start from AR Toolkit's initial estimate of the marker location, which gives the marker center in the 2D image; from that pixel we look up the (X,Y,Z) coordinates in the registered point cloud.
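The pixel-to-point lookup works because a registered, organized point cloud stores one 3D point per image pixel in row-major order. A minimal sketch of that lookup (this is an illustration, not the actual ar_kinect code; `pixel_to_xyz` and its arguments are hypothetical names):

```python
def pixel_to_xyz(cloud, width, u, v):
    """Return the (x, y, z) point behind image pixel (u, v).

    cloud: flat row-major list of (x, y, z) tuples from a
    registered, organized point cloud; width is the cloud width.
    """
    return cloud[v * width + u]

# Toy 2x2 cloud: pixel (1, 0) maps to the second point in the first row.
cloud = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0),
         (0.0, 0.1, 1.0), (0.1, 0.1, 1.0)]
print(pixel_to_xyz(cloud, 2, 1, 0))  # (0.1, 0.0, 1.0)
```

In practice the looked-up point can be NaN where the Kinect has no depth reading, so a real implementation must check for that and fall back to neighboring pixels.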

PCL is then used to compute surface normals for points in our cloud. The surface normal is used to correct the initial estimate of the marker's orientation. This method works especially well when using markers on walls for robot localization.
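The core geometric idea is that three nearby cloud points around the marker center span its supporting plane, whose normal constrains the marker's orientation far better than the image-only estimate. A minimal sketch of that normal computation, assuming hypothetical helper names (PCL's actual normal estimation fits a plane over a larger neighborhood, which is more robust to sensor noise):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def surface_normal(p, q, r):
    """Unit normal of the plane through three nearby cloud points."""
    u = tuple(q[i] - p[i] for i in range(3))
    w = tuple(r[i] - p[i] for i in range(3))
    return normalize(cross(u, w))

# Three points on the z = 0 plane: the normal points along z.
print(surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Once the normal is known, the marker's estimated pose can be rotated so that its z-axis aligns with the normal, which is what removes most of the orientation error for wall-mounted markers.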

We have successfully run marker recognition at 30 Hz on a netbook, using VGA color images and QQVGA point clouds (the ar_kinect node automatically adjusts point selection when the point cloud is smaller than the color image). Localization was tested using the locator package on the trike robot.
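When the cloud is smaller than the image (QQVGA cloud under a VGA image), marker pixel coordinates must be rescaled before indexing the cloud. A sketch of that mapping under assumed resolutions (the function name and exact rounding are illustrative, not the ar_kinect source):

```python
def select_cloud_index(u, v, img_w, img_h, cloud_w, cloud_h):
    """Map an image pixel (u, v) to the nearest index in a
    smaller organized point cloud (row-major)."""
    cu = u * cloud_w // img_w
    cv = v * cloud_h // img_h
    return cv * cloud_w + cu

# VGA image (640x480) over a QQVGA cloud (160x120):
# the image center (320, 240) maps to cloud pixel (80, 60).
print(select_cloud_index(320, 240, 640, 480, 160, 120))  # 9680 == 60 * 160 + 80
```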

How to Reproduce Your Entry

Code to Checkout

These instructions assume you have installed a desktop variant of ROS (including rviz for visualization). They further assume that you are installing third-party stacks into ~/ros/stack-name-pkg and that ~/ros is on your ROS package path:

  1. Install ni (if using cturtle, you will also want to checkout perception_pcl and perception_pcl_addons from SVN trunk).

  2. Install ccny_vision stack for the original ar_pose and ar_toolkit packages:

     cd ~/ros/stacks
     git clone http://robotics.ccny.cuny.edu/git/ccny-ros-pkg/ccny_vision.git
     cd ~/ros/stacks/ccny_vision
     rosmake ccny_vision --rosdep-install
  3. Install albany_vision stack for the ar_kinect package:

     cd ~/ros/stacks
     svn co http://albany-ros-pkg.googlecode.com/svn/trunk/ albany-ros-pkg
     cd ~/ros/stacks/albany-ros-pkg/albany_vision/ar_kinect
     rosmake ar_kinect
  4. (Optional) Build locator package:

     cd ~/ros/stacks/albany-ros-pkg/albany_nav/locator
     rosmake locator

Running the Demo

There is a demo launch file to try out ar_kinect:

  1. Print out marker #9 (PDF)

  2. Launch a roscore
  3. Plug in your Kinect and launch openni_camera:
     roslaunch openni_camera openni_kinect.launch
  4. Launch ar_kinect:
     roslaunch ar_kinect ar_kinect.launch
  5. Launch rviz and see results!

Dependencies

Software

Hardware -- see the details of our robot, trike.

  • Kinect
  • iRobot Create
  • Some AR markers (GIF files can be found in the ar_pose/data/ directory)

Future Work

The ar_kinect package is stable. The locator package currently requires a hard-coded map; removing that requirement is a major item of future work, along with finishing trajectory control so that the robot can navigate the space autonomously. trike is also hoping he might be able to lose a bit of his wide-head syndrome if he gets a PrimeSense Developer Kit device....

Wiki: openni/Contests/ROS 3D/Improved AR Markers for Topological Navigation (last edited 2012-08-24 11:08:22 by JulienProvost)