October 2010 Archives

C Turtle Update: Now with Orocos

cturtle_poster.jpg

A new C Turtle Update has been released. This update is significant in that we have the first externally managed ROS stack available for download: orocos_toolchain_ros. This stack, produced by kul-ros-pkg, provides integration between ROS and the Orocos software framework.

We also have several stacks related to Diamondback. roshpit allows users to download and test the upcoming rosh shell, which will officially debut in ROS 1.4. There are also new empty ros_comm and rx stacks, which have been added so that stacks can be compatible with both C Turtle and Diamondback packaging, once the "ros" stack has been split.

General Updates:

Diamondback-compatibility Updates:

PR2 and WG Updates:

ROS 1.3.0 Released, Unstable

ROS 1.3.0 has been released. This is an unstable release and is the first of the 1.3.x "odd-cycle" releases. During this release cycle, we expect to rapidly integrate new features and the stack is expected to be volatile.

In particular, much of the work for the ROS 1.3.x release cycle is related to REP 100, ROS Stack Separation. There are several major changes in this initial release, with more to come. For ROS 1.3.0, we have implemented the following changes:

  • genmsg_cpp has been deleted. This ends support for the current experimental rosoct and rosjava libraries.
  • All rx* packages have been moved to the rx stack. This was done to remove heavyweight WxWindows dependencies from the ROS stack.
  • rosdoc has been moved to the new documentation stack. This was done to remove heavyweight Doxygen, Epydoc, and Sphinx dependencies from the ROS stack.

The next major change will be to migrate the ROS middleware libraries to the ros_comm stack. This will mainly affect stack dependency declarations.

We've also included deprecation warnings in the rosrecord and rosbagmigration packages for this release. Use of these APIs should be ported to the new rosbag library, which has support for the latest bag format.
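
For those porting, the rosbag Python API is straightforward. Below is a minimal sketch of writing and reading a bag with it (the file name and topic are placeholders):

    # Minimal sketch of the rosbag Python API; file and topic names are
    # placeholders.  This replaces the deprecated rosrecord interfaces.
    import rosbag
    from std_msgs.msg import String

    # Write a few messages to a bag file.
    bag = rosbag.Bag('example.bag', 'w')
    for i in range(3):
        bag.write('/chatter', String(data='hello %d' % i))
    bag.close()

    # Read them back: read_messages() yields (topic, message, timestamp) tuples.
    bag = rosbag.Bag('example.bag')
    for topic, msg, t in bag.read_messages(topics=['/chatter']):
        print('%s: "%s" at %s' % (topic, msg.data, t))
    bag.close()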

For more changes, please see the ROS 1.3.0 changelist.

Also, you may now wish to consult the rx and documentation stack changelists for packages that were formerly in the ROS stack.

For more information on the ROS 1.3.x releases, you can see REP 101, ROS 1.4 Release Schedule.

FreeBSD package for ROS

freebsd.png

Rene Ladan has been working hard over the past several months providing patches to ROS in order to improve FreeBSD compatibility. Now he's announced the first FreeBSD package for ROS, which makes installation patch-free and hassle-free.

Hi,

The first FreeBSD package for ROS is now available, providing the equivalent of ros-cturtle-ros on Ubuntu (but without the build directories). The cleaned up instructions are available at the wiki.

Thanks go to the ROS developers for applying all the patches I sent to them.

I will probably add more ROS packages in the future, starting with the tutorials and the Lego NXT stacks (the latter because that's the only robot I have myself, apart from some webcams).

Regards,
Rene

http://www.rene-ladan.nl/

veltrop-rviz-stereo-screen.jpg

Taylor Veltrop has announced veltrop-ros-pkg, as well as tools for Roboard-based humanoids.

I am pleased to announce the Veltrop ROS Repository!

If any of you out there are using small servo-based robots, especially humanoids, then check this out!

The Veltrop ROS Repository leverages ROS to get hobbyists and researchers quickly up and running with the Roboard operating a humanoid robot.

The Roboard is a small 1 GHz 486 platform that has built-in PWM control and many I/O ports.

Info on KHR style humanoid

The repository consists of a stack suitable for the Roboard, and another stack specialized for small joint-based robots.

The hobby community seems to be reinventing the wheel with each person who combines an embedded PC with one of these humanoid robots. For beginners it's too daunting, and for others it is very time-consuming. So I hope to alleviate this, and get some help back too.

Here's a summary of some of the features:

  • Pose the robot based on definitions in an XML file
  • Execute motions by running a series of timed poses (XML)
  • Stabilization via gyro data
  • Definition of a KHR style robot linkage for 3D virtual modeling and servo control (URDF)
  • Calibrate trim of robot with GUI
  • Calibrate gyro stabilization with GUI
  • Import poses and trim (not motions) from Kondo's Heart2Heart RCB files
  • Control robot remotely over network with keyboard
  • Control robot with PS3 controller over bluetooth
  • Support for HMC6343 compass/tilt sensor
  • Support for Kondo gyro sensors
  • Stereo video capture and processing into point cloud
  • CPU heavy tasks (such as stereo processing) can be executed on remote computer
  • Controls Kondo PWM servos

Here are some missing parts (maybe others would like to contribute here?):

  • Control Kondo serial servos
  • GUI for editing and running poses/motions
  • Tool to capture poses
  • More sophisticated motion scripting
  • GUI for calibration of A/D inputs

My next goals for this project are to incorporate navigation, and arm/gripper trajectory planning.

The documentation is here: http://taylor.veltrop.com/robotics/khrhumanoidv2.php?topic=veltrop-ros-pkg

There's a lot of other information relevant to the robot throughout the site.

The repository is hosted on sourceforge: http://sourceforge.net/projects/veltrop-ros-pkg

I hope someone out there has a chance to try this out and contribute!

Taylor

asl_robots_640w.png

The Autonomous Systems Lab (ASL) at ETH Zurich is interested in all kinds of robots, provided that they are autonomous and operate in the real world. From mobile robots to micro aerial vehicles to boats to space rovers, they have a huge family of robots, many of which are already using ROS.

As ASL is historically a mechanical lab, their focus has been on hardware rather than software. ROS provides them a large community of software to draw from so that they can maintain this focus. Similarly, they run their own open-source software hosting service, ASLforge, which promotes the sharing of ASL software with the rest of the robotics community. Integrating with ROS allows them to more easily share code between labs and contribute to the growing ROS community.

The list of robots that they already have integrated with ROS is impressive, especially in its diversity:

  • Rezero: Rezero is a ballbot, i.e. a robot that balances and drives on a single sphere.
  • Magnebike: Magnebike is a compact, magnetic-wheeled inspection robot. Magnebike is designed to work on both flat and curved surfaces so that it can work inside metal pipes with complex arrangements. A rotating Hokuyo scanner enables them to do research on localization in these complex 3D environments.
  • Robox: Robox is a mobile robot designed for tour guide applications.
  • Crab: Crab is a space rover designed for navigation in rough outdoor terrain.
  • sFly: The goal of the sFly project is to develop small micro helicopters capable of safely and autonomously navigating city-like environments. They currently have a family of AscTec quadrotors.
  • Limnobotics: The Limnobotics project has developed an autonomous boat that is designed to perform scientific measurements on Lake Zurich.
  • Hyraii: Hyraii is a hydrofoil-based sailboat.

That's not all! Stéphane Magnenat of ASL has contributed a bridge between ROS and the ASEBA framework. This has enabled integration of ROS with many more robots, including the marXbot, handbot, smartrob, and e-puck. ASL also has a Pioneer mobile robot using ROS, and their spinout, Skybotix, develops a coax helicopter that is integrated with ROS. Not all of ASL's robots are using ROS yet, but there is a chance that we will soon see ROS on their walking robot, autonomous car, and AUV.

ASL has created an ASLForge project to provide ROS drivers for Crab, and they will be working over the next several months to select more general and high-quality libraries to release to the ROS community.

ASL's family of robots is impressive, as is their commitment to ROS. They are single-handedly expanding the ROS community in a variety of new directions and we can't wait to see what's next.

Many thanks to Dr. Stéphane Magnenat and Dr. Cédric Pradalier for help putting together this post.

Probabilistic Grasp Planning

cross-posted from willowgarage.com

One of the challenges that robots like the PR2 face is knowing how to grasp an object. We have years of experience to help us determine what objects are and how to grasp them. We can tell the difference between a mug, a wine glass, and a bowl, and know that each should be handled in a different way. For robots, the world is not as certain, but there are approaches they can take that let them interact in an uncertain world.

This summer, Peter Brook from the University of Washington wrote a grasp planning system which lets robots successfully pick up objects, even in cases where they make incorrect guesses about what the object is. This planner uses a probabilistic approach, where the robot uses potentially incomplete or noisy information from its sensors to make multiple guesses about the identity of the object it is looking at. Based on how confident the robot is in each of the possible explanations for the perceived data, it can select the grasps that are most likely to work on the underlying object.

First, the planner builds up a set of representations for the sensed data; some are based on the best guesses provided by ROS recognition algorithms, and some use the raw segmented 3D data. For each representation, it uses a grasp-planning algorithm to generate a list of possible grasps. It then combines the information from all these sources, sorting grasps based on their estimated probability of success across all the representations. For grasp planners running on known object models, it can also use pre-computed grasps that speed up execution time.
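
As a rough illustration of that ranking step, the toy sketch below marginalizes each grasp's estimated success probability over a handful of made-up object hypotheses. The data structures and numbers are purely illustrative and are not the actual probabilistic_grasp_planner interfaces:

    # Illustrative ranking of grasps across several object hypotheses.
    # Each hypothesis: (probability it explains the sensor data, list of
    # (grasp name, success probability under that hypothesis)).
    hypotheses = [
        (0.6, [('mug_handle_grasp', 0.9), ('top_pinch_grasp', 0.7)]),
        (0.3, [('top_pinch_grasp', 0.8), ('side_wrap_grasp', 0.6)]),
        (0.1, [('side_wrap_grasp', 0.5)]),   # raw-cluster fallback hypothesis
    ]

    # Expected success of each grasp, marginalized over the object hypotheses.
    expected = {}
    for p_hyp, grasps in hypotheses:
        for name, p_success in grasps:
            expected[name] = expected.get(name, 0.0) + p_hyp * p_success

    # Execute grasps in order of decreasing expected success probability.
    for name, p in sorted(expected.items(), key=lambda kv: kv[1], reverse=True):
        print('%s: expected success %.2f' % (name, p))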

This probabilistic planner allows the PR2 robot to cope with uncertainty and reliably grasp a wider range of objects in unstructured environments. It is also integrated into the ROS object manipulation pipeline so that others can experiment with and improve upon it. For more information, please see Peter's slides below (download PDF), or check out the source code in the probabilistic_grasp_planner package on ROS.org.

STOMP Motion Planner

cross-posted from willowgarage.com

Robot motion planning has traditionally been used to avoid collisions when moving a robot arm. Avoiding collisions is important, but many other desirable criteria are often ignored. For example, motions that minimize energy will let the robot extend its battery life. Smoother trajectories may cause less wear on motors and can be more aesthetically appealing. There may be even more useful criteria, like keeping a glass of water upright when moving it around.

stomp_pole.png

This summer, Mrinal Kalakrishnan from the Computational Learning and Motor Control Lab at USC worked on a new motion planner called STOMP, which stands for "Stochastic Trajectory Optimization for Motion Planning". This planner can plan collision-free, smooth paths for high-dimensional robotic systems, and can simultaneously satisfy task constraints, minimize energy consumption, or optimize other arbitrary criteria. STOMP is derived from gradient-free optimization and path integral reinforcement learning techniques (Policy Improvement with Path Integrals, Theodorou et al., 2010).
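
To give a flavor of the "stochastic trajectory optimization" idea, here is a heavily simplified one-dimensional sketch: sample noisy perturbations of the current trajectory, weight them by exponentiated cost, and average. The cost function, noise level, and per-waypoint weighting (rather than the cost-to-go used by STOMP) are illustrative simplifications, not the actual stomp_motion_planner implementation:

    # Simplified 1-DOF stochastic trajectory optimization in the spirit of STOMP.
    import numpy as np

    T, K, iters, h = 50, 20, 100, 10.0   # waypoints, rollouts, iterations, sensitivity
    theta = np.linspace(0.0, 1.0, T)     # initial straight-line trajectory

    def cost(traj):
        # Example cost: penalize waypoints inside an undesirable interval
        # (so the trajectory passes through it quickly) plus non-smoothness.
        in_band = ((traj > 0.4) & (traj < 0.6)) * 1.0
        smoothness = np.square(np.diff(traj, 2, axis=-1))
        return in_band[..., 1:-1] + 100.0 * smoothness

    for _ in range(iters):
        eps = 0.05 * np.random.randn(K, T)      # K noisy perturbations
        eps[:, 0] = 0.0                         # keep the start point fixed
        eps[:, -1] = 0.0                        # keep the goal point fixed
        costs = cost(theta + eps)               # per-rollout, per-waypoint costs
        # Softmax weights: low-cost rollouts contribute more at each waypoint.
        w = np.exp(-h * (costs - costs.min(0)) / (np.ptp(costs, axis=0) + 1e-10))
        w /= w.sum(0)
        theta[1:-1] += (w * eps[:, 1:-1]).sum(0)  # probability-weighted noise average

    print('final max per-waypoint cost: %.3f' % cost(theta).max())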

The accompanying video shows the STOMP planner being used to plan motions for the PR2 arm in simulation and a real-world setup. It shows the ability to plan motions in real-world environments, while optimizing constraints like holding the cans upright at all times. Ultimately, the utility of this motion planner is limited only by the creativity of the system designer, since it can plan trajectories that optimize any arbitrary criteria that may be important to achieve a given task.

For more information, please see Mrinal's slides below (download PDF), or check out the code in the stomp_motion_planner package on ROS.org. This package builds on the various packages in the policy_learning stack, which was written in collaboration with Peter Pastor. You can also check out Mrinal's work from last summer on the CHOMP motion planner.

Online Planning for Sensing Objects

cross-posted from willowgarage.com

Nobody likes to wait, even for a robot. So when a personal robot searches for an object to deliver, it should do so in a timely manner. To accomplish this, Feng Wu from the University of Science and Technology of China spent his summer internship developing new techniques to help PR2 select which sensors to use in order to more quickly find objects in indoor environments.

Robots like the PR2 have several sensors, but they can generally be categorized into two types: wide sensing and narrow sensing. Wide sensing covers larger areas and greater distances than narrow sensing, but the data may be less accurate. Narrow sensing, on the other hand, is more accurate but can use more power and take more time to collect and analyze. Feng worked on planning techniques to balance the tradeoffs between these two types of sensing actions, gathering more information while minimizing the cost.

The techniques involved the use of a Partially Observable Markov Decision Process (POMDP), which provides an ideal mathematical framework for modeling wide and narrow sensing. The sensing abilities and the uncertainty of the sensed data are modeled in an observation function, and the cost of sensing actions is defined in a reward function. The solution balances the costs of the sensing actions against the rewards (i.e., the amount of information gathered).

For example, one part of the search tree might represent the following: "If I move to the middle of the room, look around and observe clutter to my left, then I will be very sure that there is a bottle to the left, moderately sure that there is nothing in front of me, and completely unsure about the rest of the room." When actually performing tasks in the world, the robot will receive observations from its sensors. These observations will be used to update its current beliefs. Then the robot can plan once again, using the updated beliefs. Planning in belief space allows making tradeoffs such as: "I'm currently uncertain about where things are, so it's worth taking some time to move to the center of the room to do a wide scan. Then I'll have enough information to choose a good location toward which to navigate."
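
The toy sketch below captures the flavor of this tradeoff for a single object hidden in a few grid cells: a Bayes update of the belief after each observation, and a comparison of expected information gain versus cost for a cheap "wide" sensor and an expensive "narrow" one. The sensor models and costs are made up for illustration and are not taken from the find_object stack:

    # Toy wide-vs-narrow sensing tradeoff in belief space.
    import numpy as np

    belief = np.array([0.25, 0.25, 0.25, 0.25])   # P(object is in cell i)

    def entropy(b):
        b = b[b > 0]
        return -(b * np.log(b)).sum()

    def update(belief, cell, detected, p_hit, p_false):
        # Bayes update after observing one cell.  p_hit = P(detect | object
        # there), p_false = P(detect | object elsewhere).
        like = np.full(len(belief), p_false if detected else 1.0 - p_false)
        like[cell] = p_hit if detected else 1.0 - p_hit
        post = like * belief
        return post / post.sum()

    def expected_entropy(belief, cell, p_hit, p_false):
        # Expected posterior entropy, averaging over both possible observations.
        p_det = p_hit * belief[cell] + p_false * (1.0 - belief[cell])
        return (p_det * entropy(update(belief, cell, True, p_hit, p_false)) +
                (1.0 - p_det) * entropy(update(belief, cell, False, p_hit, p_false)))

    h0 = entropy(belief)
    cell = int(np.argmax(belief))        # sense where the object is most likely
    actions = {'wide':   dict(p_hit=0.7,  p_false=0.3,  cost=1.0),
               'narrow': dict(p_hit=0.95, p_false=0.05, cost=5.0)}
    for name, a in actions.items():
        gain = h0 - expected_entropy(belief, cell, a['p_hit'], a['p_false'])
        print('%s: expected information gain %.3f at cost %.1f' % (name, gain, a['cost']))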

For more information, you can read Feng's slides. You can also check out the find_object stack on ROS.org.

ROS interface for the Parrot AR.Drone

parrot_ardrone3.jpg

Nate Roney from the Mobile Robotics Lab at SIUE has announced drivers for the Parrot AR.Drone, as well as the siue-ros-pkg repository.

Greetings everyone,

I'd like to share a project I've been working on with the ROS community.

Some may be familiar with the Parrot AR.Drone: an inexpensive quadrotor helicopter that came out in September. My lab got one, but I was pretty disappointed that it didn't have ROS support out of the box. It does have potential, though, with 2 cameras and a full IMU, so it seemed like a worthwhile endeavor to create a ROS interface for it.

So, I would like to announce the first public release of the ROS interface for the AR.Drone. Currently, it allows control of the AR.Drone using a geometry_msgs/Twist message, and I'm working on getting the video feed, IMU data and other relevant state information published as well. Unfortunately, the documentation on how the Drone transmits its state information is a bit sparse, so getting at the video (anyone with experience converting H.263 to a sensor_msgs/Image, get in touch!) and IMU data is taking more time than I'd hoped, but it's coming along. Keep an eye on the ardrone stack, it will be updated as new features are added.

For now, if you're hoping to control your AR.Drone using ROS, this is the package for you! Either send a Twist from your own code, or use the included ardrone_teleop package for manual control.

You can find the ardrone_driver and ardrone_teleop packages on the experimental-ardrone branch of siue-ros-pkg, which itself never had a proper public release. This repository represents the Mobile Robotics Lab at SIUE, and contains a few utility nodes I have developed for some of our past projects, with more packages staged for addition to the repository once we have time to document them properly for a formal release.

http://github.com/siue-cs/siue-ros-pkg

http://github.com/siue-cs/siue-ros-pkg/tree/experimental-ardrone

I'm hopeful that someone will find some of this useful. Feel free to contact me with any questions!

Cheers,
Nate Roney
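
To get a feel for the Twist interface Nate describes, a minimal controller might look like the sketch below. Note that the 'cmd_vel' topic name is an assumption here; check the ardrone_driver documentation for the topic the driver actually subscribes to:

    #!/usr/bin/env python
    # Minimal sketch of commanding the drone with a geometry_msgs/Twist.
    # The 'cmd_vel' topic name is an assumption, not confirmed from the driver.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('ardrone_twist_example')
    pub = rospy.Publisher('cmd_vel', Twist)
    rospy.sleep(1.0)                    # give the connection time to establish

    cmd = Twist()
    cmd.linear.z = 0.5                  # gentle climb
    cmd.angular.z = 0.2                 # slow yaw

    rate = rospy.Rate(10)               # resend the command at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()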

ROS/ASEBA Bridge

marxbot-complete-detour.jpg

Stéphane Magnenat from the Autonomous Systems Lab at ETH Zurich has announced a ROS/ASEBA bridge.

Dear list,

Thanks to your quick and precise answers, I have programmed a bridge between ASEBA and ROS:

http://github.com/stephanemagnenat/asebaros

This bridge allows you to load source code, inspect the network structure, read and write variables, and send and receive events from ROS.

This brings ROS to the following platforms:

  • Mobots' marxbot, handbot and smartrob
  • e-puck

Kind regards,
Stéphane

Embedded project for ROS (eros)

Daniel Stonier from Yujin Robot has been bringing up an embedded project for ROS, eros. Below is his announcement to ros-users.

Let's bring down ROS! ...to the embedded level.

Greetings all,

Firstly, my apologies - couldn't resist the pun.

This is targeted at anyone who is either working with a fully cross-compiled ROS or simply using it as a convenient build environment to do embedded programming with toolchains.

Some of you might remember me sending out an email to the list about getting together to collaborate on ROS at the embedded level rather than having us all flying solo all the time. Since then, I'm happy to say, Willow has generously offered us space on their server to create a repository supporting embedded/cross-compiling development, which has now been kick-started with a relatively small but convenient framework that we've been using and testing at Yujin Robot for a while. The lads there have been excellent guinea pigs, particularly since most of them were very new to Linux and had little or no experience in cross-compiling.

Eros

A quick summary of what we have there so far:

If you want to take the tools for a test run, simply svn eros into the stacks directory of your ROS install, e.g.:

    roscd
    cd ../stacks
    svn co https://code.ros.org/svn/eros/trunk ./eros

Getting Involved

But, what would be great at this juncture would be to have other embedded beards jump on board and get involved.

  • Tutorials on the wiki - platform howtos, system building notes...
  • General discussion on the eros forums.
  • Feedback on the current set of tools.
  • New ideas.
  • Diagnostic packages.
  • New toolchain/platform modules.
  • Future development

If you'd like to get involved, create an account on the wiki/project server and send me an email (d.stonier@gmail.com).

Future Plans

The goals page outlines where I've been thinking of taking eros, but of course this is not fixed and, as it's early, very open to new ideas. However, two big components I'd like to address in the future include:

Embedded package installer - a package+dependency chain (aka rosmake) installer. This is a bit different from Willow's planned stack installer, but will need to co-exist alongside it and should use as much of its functionality as possible.

Abstracted System Builder as an OS - hooking in something like OpenEmbedded as an abstracted OS that can work with rosdeps.

and of course, making the eros wiki a lot more replete with embedded knowledge.

Kind Regards,
Dr. Daniel Stonier.

cross-posted from willowgarage.com

Simple trial and error is one of the most common ways that we learn how to perform a task. While we can learn a great deal by watching and imitating others, it is often through our own repeated experimentation that we learn how -- or how not -- to perform a given task. Peter Pastor, a PhD student from the Computational Learning and Motor Control Lab at the University of Southern California (USC), has spent his last two internships here working on helping the PR2 to learn new tasks by imitating humans, and then improving that skill through trial and error.

Last summer, Peter's work focused on imitation learning. The PR2 robot learned to grasp, pour, and place beverage containers by watching a person perform the task a single time. While it could perform certain tasks well with this technique, many tasks require learning about information that cannot necessarily be seen. For example, when you open a door, the robot cannot tell how hard to push or pull. This summer, Peter extended his work to use reinforcement learning algorithms that enable the PR2 to improve its skills over time.

Peter focused on teaching the PR2 two tasks with this technique: making a pool shot and flipping a box upright using chopsticks. With both tasks, the PR2 robot first learned the task via multiple demonstrations by a person. With the pool shot, the robot was able to learn a more accurate and powerful shot after 20 minutes of experimentation. With the chopstick task, the robot was able to improve its success rate from 3% to 86%. To illustrate the difficulty of the task, Peter conducted an informal user study in which 10 participants performed 20 attempts at flipping the box by guiding the robot's gripper. Their success rate was only 15%.

Peter used Dynamic Movement Primitives (DMPs) to compactly encode movement plans. The parameters of these DMPs can be learned efficiently from a single demonstration by guiding the robot's arms. These parameters then become the initialization of the reinforcement learning algorithm that updates the parameters until the system has minimized the task-specific cost function and satisfied the performance goals. This state-of-the-art reinforcement learning algorithm is called Policy Improvement using Path Integrals (PI^2). It can handle high dimensional reinforcement learning problems and can deal with almost arbitrary cost functions.
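
In its simplest one-dimensional form, a discrete DMP is just a damped spring system pulled toward the goal, modulated by a learned forcing term. The sketch below uses placeholder gains and a zero forcing term, so it illustrates only the structure, not Peter's learned parameters:

    # One-dimensional discrete DMP sketch: a critically damped spring pulled
    # toward the goal, modulated by a forcing term f(s) learned from
    # demonstration.  Gains and the zero forcing term are placeholders.
    import numpy as np

    K = 100.0                            # spring gain
    D = 2.0 * np.sqrt(K)                 # damping gain (critically damped)
    tau, dt = 1.0, 0.001                 # movement duration and time step
    x0, g = 0.0, 1.0                     # start and goal positions

    def forcing(s):
        # The learned part of the DMP: normally a weighted sum of basis
        # functions of the phase s.  Zero here, giving a plain reach to the goal.
        return 0.0

    x, v, s = x0, 0.0, 1.0
    for _ in range(int(tau / dt)):
        s += (-2.0 * s / tau) * dt                               # canonical system (phase)
        v_dot = (K * (g - x) - D * v + (g - x0) * forcing(s)) / tau
        v += v_dot * dt
        x += (v / tau) * dt

    print('final position %.3f (goal %.3f)' % (x, g))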

Programming robots by hand is challenging, even for experts. Peter's work aims to facilitate the encoding of new motor skills in a robot by guiding the robot's arms, enabling it to improve its skill over time through trial and error. For more information about Peter's research with DMPs, see "Learning and Generalization of Motor Skills by Learning from Demonstration" from ICRA 2009. The work that Peter did this summer has been submitted to ICRA 2011 and is currently under review. Check out his presentation slides below (download PDF). The open source code has been written in collaboration with Mrinal Kalakrishnan and is available in the ROS policy_learning stack.

cross-posted from willowgarage.com

This summer, Hae Jong Seo, a PhD student from the Multidimensional Signal Processing Research Group at UC Santa Cruz, worked with us on object and action recognition using low-cost web cameras. In order for personal robots to interact with people, it is useful for robots to know where to look, locate and identify objects, and locate and identify human actions. To address these challenges, Hae Jong implemented a fast and robust object and action detection system using features called locally adaptive regression kernels (LARK).

LARK features have many applications, such as saliency detection. Saliency detection determines which parts of an image are most significant, such as regions containing objects or people. You can then focus your object detection on the salient regions of the image in order to detect objects more quickly. Saliency detection can also be extended to "space-time" for use with video streams.

LARK features can also be used for generic object and action detection. As you can see in the video, objects such as door knobs, the PR2 robot, and human faces can be detected using LARK. Space-time LARK can also detect human actions, such as waving, sitting down, and getting closer to the camera.

For more information, see the larks package on ROS.org or see Hae Jong's slides below (download PDF). You can also consult Peyman Milanfar's publications for more information on these techniques.

The Humanoid Robots Lab at the University of Freiburg is using the Aldebaran Nao robot to do a variety of research, from climbing stairs, to imitating human motions, to footstep planning. One of their Naos, nicknamed "Osiris", has a special modification: a Hokuyo laser rangefinder head. This modification enables their research on localization for humanoid robots in complex environments.

Localization on humanoid robots is much more difficult due to the shaking motion of the robot while moving. Using techniques that will be outlined in an upcoming IROS paper [1], they are able to do 6D localization of the Nao's torso based on laser, odometry, IMU, and proprioception data. In the video above, you can see Osiris localizing itself while walking and climbing stairs.
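
Techniques of this kind are typically built around a particle filter (Monte Carlo localization): predict the pose with noisy odometry, weight each particle by how well the sensor data matches the map, and resample. The one-dimensional toy below illustrates that loop; the real system of course works in 6D and fuses laser, odometry, IMU, and proprioception:

    # Toy 1-D Monte Carlo localization: noisy motion update, Gaussian
    # measurement likelihood against a single known landmark, then resampling.
    # All numbers are illustrative.
    import numpy as np

    np.random.seed(0)
    landmark, true_x = 10.0, 1.0                   # wall position, true robot pose
    particles = np.random.uniform(0.0, 10.0, 500)

    for step in range(8):
        # Motion update: the robot moves 0.5 m; particles get noisy copies.
        true_x += 0.5
        particles += 0.5 + np.random.normal(0.0, 0.05, particles.size)

        # Measurement update: range to the wall, Gaussian likelihood.
        z = landmark - true_x + np.random.normal(0.0, 0.1)
        weights = np.exp(-0.5 * ((z - (landmark - particles)) / 0.1) ** 2) + 1e-300
        weights /= weights.sum()

        # Resample particles in proportion to their weights, plus a little jitter.
        particles = np.random.choice(particles, particles.size, p=weights)
        particles += np.random.normal(0.0, 0.01, particles.size)

    print('estimate %.2f m, true pose %.2f m' % (particles.mean(), true_x))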

The researchers at Uni Freiburg have been long-time contributors to ROS and run their own alufr-ros-pkg open source repository, which contains libraries for articulation models, 3D occupancy grids (OctoMap), and a Nao stack that builds on Brown's Nao driver to provide additional ROS integration.

Uni Freiburg hopes to build on their research with humanoids to work towards a full navigation stack for humanoids. This will include a footstep planning library, which they will be releasing in alufr-ros-pkg soon. Below are some screenshots of their 3D scans and footstep plans in rviz.

[1] "Humanoid Robot Localization in Complex Indoor Environments" by Armin Hornung, Kai M. Wurm, and Maren Bennewitz (to be presented at IROS 2010).

Previously: Robots Using ROS: Aldebaran Nao

osiris_plan_intro.png

osiris_3d_1.png

cross-posted from willowgarage.com

Bastian Steder, a PhD student from the Autonomous Intelligent Systems Group at the University of Freiburg, Germany, spent the summer at Willow Garage implementing an object recognition system using 3D point cloud data. With 3D sensors becoming cheaper and more widely available, they are a valuable tool for robot perception. 3D data provides extra information to a robot, such as distance and shape, that enables different approaches to identifying objects in the world. Bastian's work focused on using databases of 3D models to identify objects in this 3D sensor data.

The main focus for Bastian's work was on the feature-extraction process for 3D data. One of his contributions was a novel interest keypoint extraction method that operates on range images generated from arbitrary 3D point clouds. This method explicitly considers the borders of objects, identified by transitions from foreground to background. Bastian also developed a new feature descriptor type, called NARF (Normal Aligned Radial Features), that takes the same information into account. Based on these feature matches, Bastian then worked on a process to create a set of potential object poses and added spatial verification steps to ensure these observations fit the sensor data.
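
To get an intuition for the border cue this builds on, think of a range image as a 2D array of depths: object borders show up as large depth jumps between neighboring pixels. The toy sketch below just thresholds those jumps; the actual keypoint and descriptor code is the NARF implementation integrated into PCL:

    # Toy illustration of border detection in a range image: mark pixels next
    # to a large depth discontinuity.  (The real method also distinguishes the
    # foreground side from the background "shadow" side of each jump.)
    import numpy as np

    # Fake 6x6 range image (meters): a close object (1 m) against a far
    # background (4 m).
    r = np.full((6, 6), 4.0)
    r[2:5, 2:5] = 1.0

    threshold = 0.5
    jump_right = np.abs(np.diff(r, axis=1)) > threshold
    jump_down = np.abs(np.diff(r, axis=0)) > threshold

    borders = np.zeros_like(r, dtype=bool)
    borders[:, :-1] |= jump_right      # pixel to the left of a horizontal jump
    borders[:, 1:] |= jump_right       # pixel to the right of a horizontal jump
    borders[:-1, :] |= jump_down
    borders[1:, :] |= jump_down

    print(borders.astype(int))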

The full system can very efficiently identify the existence and poses of arbitrary objects for which a point cloud model is available, using only the geometric information provided by the 3D sensor. Code for Bastian's work, including object recognition and feature extraction, has been integrated with PCL, which is a general library for 3D geometry processing in development at Willow Garage. To find out more, check out the point_cloud_perception stack on ROS.org. For detailed technical information, you can check out Bastian's presentation slides below (download PDF).

Camera pose estimation stack

Steven Bellens and Koen Buys from kul-ros-pkg have announced a new camera_pose_estimation stack.

We've put a first version of our camera_pose_estimation stack online, available in the kul-ros-pkg repository.

The stack builds upon the ar_pose package, which tracks one or multiple markers with a single camera, but extends it to allow tracking markers with multiple cameras while taking into account the measurement uncertainty provided in the ARMarker message. All available estimates are converted to world coordinates and fused using an Extended Kalman Filter as provided in BFL. If you have any suggestions/comments/questions about the stack, let us know!

best regards,

Steven Bellens
Koen Buys
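
To illustrate the kind of fusion the stack performs, the sketch below combines position estimates from two cameras by inverse-covariance weighting, which is the static analogue of what the Extended Kalman Filter in BFL does over time. The numbers are illustrative; this is not the camera_pose_estimation code itself:

    # Toy fusion of one marker's position as seen by two cameras, weighted by
    # the measurement covariances.  Values are made up for illustration.
    import numpy as np

    # Position estimates (already in world coordinates) and their covariances.
    z1, P1 = np.array([1.00, 2.00, 0.50]), np.diag([0.010, 0.010, 0.040])
    z2, P2 = np.array([1.05, 1.95, 0.48]), np.diag([0.002, 0.002, 0.020])

    # Information-filter style fusion: sum the inverse covariances.
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P_fused = np.linalg.inv(I1 + I2)
    z_fused = P_fused.dot(I1.dot(z1) + I2.dot(z2))

    print('fused position: %s' % np.round(z_fused, 3))
    print('fused std devs: %s' % np.round(np.sqrt(np.diag(P_fused)), 3))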

Making Manipulation More Mobile

cross-posted from willowgarage.com

Adam Harmat from McGill University worked on three projects this summer to make the PR2 more dexterous when manipulating objects: a monitoring system for arm movement, a persistent 3D collision map, and a multi-table manipulation application. All of these projects demonstrated how increased knowledge of its environment is necessary for improving PR2's mobile manipulation capabilities.

The arm-monitoring system uses head-mounted stereo cameras to detect new obstacles. While the arm moves, the PR2 looks at locations that are a few seconds ahead of the arm's current position. Any detected obstacles are added to a collision map, and, if a future collision is anticipated, the arm stops and waits. If the new obstacle doesn't move, the PR2 will attempt to move around it.

The collision map was improved to store information about everything the robot has previously seen. This allows the PR2 to perform tasks that require it to move around, since it maintains knowledge about places it cannot currently see. This new collision map is based on OctoMap, an open source package from the University of Freiburg. The octree structure of OctoMap is more compact and also enables storing probabilistic occupancy values.
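
The core of OctoMap's probabilistic update is a per-leaf log-odds value that is nudged up on "hit" observations and down on "miss" observations. The sketch below shows that update for a single cell; the increment values are illustrative rather than OctoMap's exact parameters:

    # Log-odds occupancy update for a single cell, in the style of OctoMap.
    import math

    def logodds(p):
        return math.log(p / (1.0 - p))

    def prob(l):
        return 1.0 / (1.0 + math.exp(-l))

    l_hit, l_miss = logodds(0.7), logodds(0.4)   # updates for occupied / free readings

    cell = 0.0                                   # log-odds 0 == probability 0.5
    for hit in [True, True, False, True]:        # three hits, one miss
        cell += l_hit if hit else l_miss

    print('occupancy probability: %.2f' % prob(cell))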

No one wants a clumsy robot. As a result of these projects, the PR2 is able to maintain more knowledge about its local environment, and is able to keep its arms from bumping into objects. Adam developed a demo application to demonstrate these new capabilities.

In his multi-table manipulation demo, the PR2 continuously finds and moves objects between separate tables. This application is integrated with the ROS navigation stack to determine pickup locations and navigate between tables. Adam's multi-table application demonstrates how planning with the persistent collision map can be integrated with base movement and local task-execution into a complete system.

For more information, you can view Adam's presentation slides below (download as PDF), or check out the move_arm_head_monitor and the multi_table_detector packages on ROS.org.