Overview

tabletop_objects is a package for detecting and localizing objects resting on a supporting planar surface (e.g. a table or countertop).

The object detection is performed in two steps: a bottom-up step that produces initial estimates of the object poses, followed by a top-down model-fitting step that fits 3D models at each location estimated in the first step.

Bottom-up pose estimation

In the bottom-up step, the initial object poses are estimated. This is currently done using the dense point cloud coming from the stereo camera (with projected pattern). The point cloud is first filtered to remove the points belonging to the table plane; the remaining points are then grouped into clusters using mean-shift clustering and further filtered using priors on cluster size and proximity. The surviving clusters are used to estimate the initial object poses.

tabletop_clusters_cloud.png

tabletop_clusters.png
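As an illustration of the clustering stage, here is a hedged sketch (not the package's actual implementation) of mean-shift clustering over table-plane coordinates, with priors corresponding to the min_points_per_cluster and min_object_distance parameters applied afterwards; the function and test data are made up for the example:

```python
# Hedged sketch of the clustering stage: mean-shift on 2D table-plane
# coordinates, then filtering by cluster-size and proximity priors.
import numpy as np

def mean_shift_clusters(points, bandwidth=0.05, max_iterations=7,
                        min_points_per_cluster=10, min_object_distance=0.05):
    """Group 2D points into clusters; return a list of cluster centers."""
    modes = points.copy()
    for _ in range(max_iterations):
        # Shift every seed toward the mean of its neighborhood.
        for i, m in enumerate(modes):
            mask = np.linalg.norm(points - m, axis=1) < bandwidth
            modes[i] = points[mask].mean(axis=0)
    # Merge modes that converged to (nearly) the same location.
    centers, counts = [], []
    for m in modes:
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < min_object_distance:
                counts[j] += 1
                break
        else:
            centers.append(m)
            counts.append(1)
    # Apply the cluster-size prior.
    return [c for c, n in zip(centers, counts) if n >= min_points_per_cluster]

# Two well-separated blobs of points -> two cluster centers.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal((0.0, 0.0), 0.01, (50, 2)),
                 rng.normal((0.3, 0.3), 0.01, (50, 2))])
centers = mean_shift_clusters(pts)
print(len(centers))  # -> prints 2
```

The bandwidth here is an illustrative value; in the real node the behavior is governed by the parameters documented below.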

We are working on improving the object pose initialization pipeline described above by incorporating additional cues about the objects' location and orientation using a set of 3D features.

This step is implemented in the tabletop_detector node.

Top-down model fitting

In this step, the object pose estimations computed in the bottom-up step are used to fit 3D models to their respective locations. Doing that allows us to determine the object identity and refine the object's pose with high accuracy. For performing the model fitting we are using an ICP-like (Iterative Closest Point) algorithm combined with a 3D distance transform for fast nearest point computations.

tabletop_models_cloud.png

tabletop_models.png
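The fitting loop can be sketched roughly as follows (a hedged 2D illustration, not the node's code): each iteration finds the nearest scene point for every model point — brute force here, where the node accelerates this lookup with a precomputed 3D distance transform — then applies a standard Kabsch/SVD rigid-alignment update:

```python
# Hedged 2D sketch of an ICP-like model-fitting loop.
import numpy as np

def fit_model(model, scene, iterations=10):
    """Return rotation R and translation t so that model @ R.T + t ~ scene."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        moved = model @ R.T + t
        # Nearest scene point for each model point (brute force; the node
        # uses a 3D distance transform for fast nearest-point computation).
        d2 = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(axis=2)
        corr = scene[d2.argmin(axis=1)]
        # Kabsch step: best rigid transform aligning moved points to corr.
        mc, cc = moved.mean(axis=0), corr.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mc).T @ (corr - cc))
        Rs = Vt.T @ U.T
        if np.linalg.det(Rs) < 0:   # guard against reflections
            Vt[-1] *= -1
            Rs = Vt.T @ U.T
        R, t = Rs @ R, Rs @ t + cc - Rs @ mc
    return R, t

# Synthetic check: a 5x5 grid of "model" points versus the same grid
# rotated by 0.05 rad and shifted; the loop should recover the transform.
g = np.linspace(0.0, 0.8, 5)
model = np.array([(x, y) for x in g for y in g])
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.02, -0.01])
scene = model @ R_true.T + t_true

R, t = fit_model(model, scene)
residual = np.abs(model @ R.T + t - scene).max()
print(residual < 1e-8)  # -> prints True
```

Like any ICP variant, this only refines a pose that is already roughly correct, which is why the bottom-up step must supply the initial estimates.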

This step is implemented in the model_fitter node.

Nodes

tabletop_detector

tabletop_detector performs the first step (bottom-up detection) described above.

Subscribed Topics

narrow_stereo/left/image_rect (sensor_msgs/Image)
  • Rectified left narrow stereo image.
narrow_stereo/disparity (sensor_msgs/Image)
  • Disparity of the narrow stereo.
narrow_stereo/disparity_info (stereo_msgs/DisparityInfo)
  • Disparity info of the narrow stereo.
narrow_stereo/right/image_rect (sensor_msgs/Image)
  • Rectified right narrow stereo image.
tf_message (tf/tfMessage)
  • tf transforms.
narrow_stereo/cloud (sensor_msgs/PointCloud)
  • Narrow stereo point cloud.
narrow_stereo/left/camera_info (sensor_msgs/CameraInfo)
  • Narrow stereo camera info.

Published Topics

tabletop_detector/objects (sensor_msgs/PointCloud)
  • Publishes a point cloud containing the points of all detected objects, after the table plane has been removed.
visualization_marker (visualization_msgs/Marker)
  • Publishes various markers used for debugging purposes.
track_object (tabletop_objects/TrackObject)
  • Publishes information used to initialize the model_tracker node.

Services

table_top/find_object_poses (tabletop_objects/FindObjectPoses)
  • Service called for initiating the object detection.

Parameters

~display (bool, default: false)
  • Used to enable the display of debugging information.
~min_points_per_cluster (int, default: 10)
  • Minimum points each cluster representing an object should have.
~max_clustering_iterations (int, default: 7)
  • Maximum number of clustering iterations to perform.
~min_object_distance (double, default: 0.05)
  • Minimum distance between two distinct objects.
~min_object_height (double, default: 0.05)
  • Minimum height of an object.
~target_frame (string, default: "odom_combined")
  • Target frame in which to return the object pose.
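For example, these private parameters can be set from a roslaunch file (a hedged sketch; the node executable name and namespace are assumed from this page, and the values shown are arbitrary):

```xml
<!-- Hedged sketch: node type and names assumed from this page. -->
<launch>
  <node pkg="tabletop_objects" type="tabletop_detector" name="tabletop_detector">
    <!-- Private (~) parameters from the table above -->
    <param name="min_points_per_cluster" value="20" />
    <param name="min_object_distance" value="0.07" />
    <param name="target_frame" value="odom_combined" />
  </node>
</launch>
```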

model_fitter

model_fitter implements the top-down model fitting described above.

Published Topics

visualization_marker (visualization_msgs/Marker)
  • Publishes various markers used for debugging purposes.

Services

tabletop_objects/model_fit (tabletop_objects/ModelFit)
  • Starts a model fit on a point cloud.

Parameters

~template_path (string, default: "")
  • The path where the object models are stored (currently the node can handle models in PLY format).
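As an illustration of the model format, a minimal reader for the vertex section of an ASCII PLY file might look like this (hedged sketch; real PLY files may be binary or carry extra properties, which the node's loader must handle):

```python
# Hedged sketch: read vertex coordinates from an ASCII PLY string.
def read_ply_vertices(text):
    lines = iter(text.splitlines())
    assert next(lines).strip() == "ply"   # magic line
    n_vertices = 0
    for line in lines:
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        elif line.strip() == "end_header":
            break
    # The first three properties per vertex line are taken as x, y, z.
    return [tuple(float(v) for v in next(lines).split()[:3])
            for _ in range(n_vertices)]

sample = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
end_header
0 0 0
1 0 0
0 1 0
"""
print(read_ply_vertices(sample))
# -> [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```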

model_tracker

model_tracker tracks an object model after it has been detected by the detection pipeline described above.

Subscribed Topics

track_object (tabletop_objects/TrackObject)
  • Receives the target object to track from the tabletop_detector node and initializes tracking.
narrow_stereo/cloud (sensor_msgs/PointCloud)
  • Narrow stereo point cloud.

Published Topics

visualization_marker (visualization_msgs/Marker)
  • Publishes various markers used for debugging purposes.

Parameters

~template_path (string, default: "")
  • The path where the object models are stored (currently the node can handle models in PLY format).

Tutorials

This is a tutorial on how to use functionality provided by this package.

  • Build the tabletop_objects package:

    rosmake tabletop_objects
  • Make sure the robot is up, the projected pattern is on and that the stereo camera is working. For help with the stereo camera see this.

  • Launch the nodes:

    roscd tabletop_objects
    roslaunch launch/model_fitter.launch
  • At this point the tabletop object detection pipeline is up and ready to go; however, it will only start detecting when the table_top/find_object_poses service is called. For a quick test (to make sure everything is running properly), you can run the 'test_service' node, which simply calls the above service:

    rosrun tabletop_objects test_service

    For a more complex example of how to use the tabletop_objects detector and integrate it with the planning pipeline, see the 'simple.cpp' source file in the tabletop_scripts package.

Video

Wiki: tabletop_objects (last edited 2010-01-08 21:37:18 by KenConley)