September 2010 Archives

URDF tools from wu-ros-pkg

| No Comments | No TrackBacks

Hey all,

I wanted to announce a new package that resulted from my work this past summer. It's a handful of tools I've found useful for building URDF models, collected in a stack with the clever name urdf_tools (code).

There are four packages inside:

More information is on the respective wiki pages. Questions, feedback, and bug reports are most welcome.
-David Lu!!

NOTE: for discussion, please use the ros-developers list

REP 3: This REP defines target platforms for each ROS Distribution Release. We define platforms to include both operating system releases (e.g., Ubuntu Lucid) as well as major language releases (e.g., Python 2.5). The target platforms represent the set on which all stable, released stacks are expected to work. Exceptions can be made for stacks that are intentionally platform-specific.

REP 10: REP 10 outlines the ros-developers voting guidelines. These guidelines serve to provide feedback or gauge the "wind direction" on a particular proposal, idea, or feature. They don't have a binding force. REP 10 is a direct copy of PEP 10 by Barry Warsaw. The Author field of this document has been updated to reflect responsibility for maintenance.

Robots Using ROS: Kitemas LV1

| No Comments | No TrackBacks

We first covered Takashi Ogura's (aka OTL) robot projects back in March when he got the ROS PS3 joystick driver working with an i-Sobot. He has many more fun projects that are too numerous to cover: White Bear Robot (Roomba + Navigation stack), Arduino board for the i-Sobot, Twitter control for humanoid robot, and an all-time classic, humanoid robot with iPhone 3GS head.

Along the way, OTL has been putting together tutorials and previews of ROS libraries for his Japanese audience, such as a Japanese speech node, Twitter for ROS using OAuth, a URDF tutorial, EusLisp demos, and many more.

Many of those tutorials and projects came together in the video above: Kitemas LV1. Kitemas LV1 is a fun drink ordering robot that lets you order a drink and then pours it for you. Judging from previous posts, it looks like Kitemas is using a Roomba with Hokuyo laser range finder for autonomous navigation, as well as a USB web camera. Drink selection can be done either through colored coasters or a Twitter API, and the robot can be driven manually with a PS3 joystick.

Here's a software diagram that shows the various ROS nodes working together:

OTL has also created otl-ros-pkg, so readers of his blog can get code samples for his various tutorials and even see code for robots like Kitemas above. You can watch a video with a more dressed up version of Kitemas LV1 here.


Point Cloud Library (PCL) is a new library under development to support n-D point clouds and 3D geometry processing. It contains a rapidly growing library of state-of-the-art algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation, and more.

For the past few months, the PCL project has been growing in size, with more users and developers joining the project on a weekly basis. We're currently working hard on adding more functionality, while at the same time fixing bugs and improving the existing documentation.

To support its growing community, the PCL project got its own mailing list today. As PCL can be used independently of ROS, we hope that this new mailing list will be a good forum for integrating PCL with a variety of systems, as well as discussing the design and development of new state-of-the-art perception algorithms.

PCL also has a new, easier-to-remember URL. There are already many tutorials, an FAQ, API documentation, and more.

PCL is still "unstable" as we learn how to best provide a useful API, but with your help and feedback, we hope to reach a stable release for ROS Diamondback.

The following REPs have been posted for comment.

To join in on the discussion, please sign up for ros-developers. You can also browse the archives.

ROS Enhancement Proposals (REPs)

| No Comments | No TrackBacks

ROS was started less than three years ago and much has changed in that short period of time. Now, with over 30 institutions contributing public, open source repositories and over 40 different robots supporting ROS, we recognize that we must make additional efforts to incorporate community feedback into the ongoing development of ROS.

We are adopting a new "ROS Enhancement Proposal" (REP) process to enable members of the community to propose, design, and develop new features for ROS and its core libraries. Thanks to the efforts of the Python community and its PEP process, we were able to quickly bootstrap this new process.

The process is fairly straightforward. Anyone in the community can author a REP and circulate it to the ros-developers list. An index of REPs is maintained in REP 0, and the process itself is defined in REP 1*.

We will soon circulate several REPs for public comment to provide information on current work with ROS. We will also circulate process- and information-type REPs to help better define the REP process.

We invite members of the community to review REP 1 and put together their own REPs for consideration.

* REP 1 is mostly a search-and-replace of PEP 1, so much credit to Barry Warsaw, Jeremy Hylton, David Goodger.

Scalable Object Recognition

| No Comments | No TrackBacks

cross-posted from

Marius Muja from the University of British Columbia returned to Willow Garage this summer to continue his work on object recognition. In addition to working on an object detector that can scale to a large number of objects, he has also been designing a general object recognition infrastructure.

One problem that many object detectors have is that they get slower as they learn new objects. Ideally we want a robot that goes into an environment and is capable of collecting data and learning new objects by itself. In doing this, however, we don't want the robot to get progressively slower as it learns new objects.

Marius worked on an object detector called Binarized Gradient Grid Pyramid (BiGGPy), which uses the gradient information from an image to match it to a set of learned object templates. The templates are organized into a template pyramid. This tree structure has low resolution templates at the root and higher resolution templates at each lower level. During detection, only a fraction of this tree must be explored. This results in big speedups and allows the detector to scale to a large number of objects.
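The coarse-to-fine search described above can be sketched in a few lines. This is only an illustrative toy under invented names (TemplateNode, detect, and the similarity measure are all made up here), not BiGGPy's actual implementation:

```python
# Illustrative coarse-to-fine search over a template pyramid (toy sketch,
# not BiGGPy's real code; names and the similarity measure are invented).

class TemplateNode:
    def __init__(self, template, children=None):
        self.template = template        # stand-in for a gradient template
        self.children = children or []  # higher-resolution refinements

def score(template, image):
    """Toy similarity: count of matching entries."""
    return sum(1 for a, b in zip(template, image) if a == b)

def detect(roots, image, threshold):
    """Descend the pyramid, refining only branches whose coarse template
    scores above threshold; leaves are full-resolution matches."""
    matches, frontier = [], list(roots)
    while frontier:
        node = frontier.pop()
        if score(node.template, image) >= threshold:
            if node.children:
                frontier.extend(node.children)  # explore promising branch
            else:
                matches.append(node.template)   # full-resolution match
    return matches
```

Because low-scoring branches are pruned at coarse resolution, adding more objects grows the tree, but only a small fraction of it is ever visited, which is the source of the claimed speedup.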

Marius also worked on a recognition infrastructure that allows object detection algorithms to be dynamically loaded as C++ plugins. These algorithms can easily be combined together or swapped one for another. The infrastructure also allows very efficient data passing between the different algorithms using shared memory instead of data copying. This recognition infrastructure is useful for both users and researchers -- they can experiment with different algorithms and combine them together in a system without having to write additional code.
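The plugin idea (algorithms registered by name, then combined or swapped without writing new glue code) can be sketched in Python. The real rein infrastructure uses dynamically loaded C++ plugins and shared-memory data passing; everything below is a hypothetical illustration of the pattern only:

```python
# Hypothetical sketch of a plugin-style recognition pipeline.

DETECTORS = {}

def register(name):
    """Class decorator: make a detector selectable by name at runtime."""
    def wrap(cls):
        DETECTORS[name] = cls
        return cls
    return wrap

@register("gradient")
class GradientDetector:
    def process(self, data):
        return data + ["gradient"]

@register("template")
class TemplateMatcher:
    def process(self, data):
        return data + ["template"]

def build_pipeline(names):
    """Chain detectors by name; swapping algorithms is just editing the list."""
    stages = [DETECTORS[n]() for n in names]
    def run(data):
        for stage in stages:
            data = stage.process(data)
        return data
    return run
```

With such a registry, experimenting with a different combination of algorithms is a configuration change rather than new code, which is the benefit the paragraph above describes.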

The code and documentation for the binarized gradient grid pyramid object detector (bigg_detector) and the recognition infrastructure (rein) are available online. Slides from Marius' end-of-summer presentation can be viewed below.

Robots Using ROS: PIXHAWK Helicopters

| No Comments | No TrackBacks

PIXHAWK is an open-source framework and middleware for micro air vehicles (MAVs) that focuses on computer vision. The framework is being developed by students at ETH Zurich, and they recently won second place in the EMAV 2009 Indoor Autonomy Competition. The PIXHAWK software runs on several MAVs, including the PIXHAWK Cheetah Quadrotor and the Pioneer Coax Helicopter. The Cheetah Quadrotor was demoed at ECCV 2010, demonstrating stable autonomous flight using onboard computer vision and some interaction using ball tracking. A parts list and assembly instructions for the Cheetah are available on the PIXHAWK web site.

The PIXHAWK middleware, MAVLink, runs on top of MIT's LCM middleware system and the PIXHAWK team has also integrated their system with ROS to provide access to tools like rviz. With rviz, PIXHAWK users can visualize a variety of 3D data from the MAVs, including pose estimates from the computer vision algorithms, as well as waypoints and IMU measurements. Other ROS processes can easily be interfaced with a PIXHAWK system.


The PIXHAWK team has also made their own open-source contributions to visualization tools for MAVs. Their QGroundControl mission planning tool provides a variety of visualizations, including real-time plotting of telemetry data. It was initially developed for PIXHAWK-based systems, but is now open to the whole MAV community.

The rest of the PIXHAWK software, including computer vision framework and flight controller software, is also available as open source. You can checkout their winter 2010 roadmap, which includes release of their ARTK hovering code base with ROS support.

The PIXHAWK team is also taking orders for a batch production run of their pxIMU Autopilot and Inertial Measurement Unit Board ($399). It provides a compact, integrated solution for those building their own quadrotors. The firmware is open source and compatible with the PIXHAWK software like QGroundControl.

Actionlib for roslua

| No Comments | No TrackBacks

Tim Niemueller has announced an actionlib implementation for his roslua client library.

Hi ROS users.

We have released another piece of the Lua integration for ROS; this time it's actionlib_lua. It has been developed at Intel Labs Pittsburgh as part of my research stay this year, working with Dr. Siddhartha Srinivasa on the Personal Robotics project. The source code is available online; it requires the most recent version of roslua.

It implements most features of actionlib, on both the client and server side. Additionally, it allows for some small optimizations; e.g., you can ignore the feedback and cancellation topics if they are not required or supported. It interacts well with the original actionlib for C++ and Python, and we are using it on HERB.

As always, feedback is welcome,

Urbi Open Source Contest

| No Comments | No TrackBacks


Gostai is running an Urbi Open Source Contest from September 15 to December 15. Perhaps there's a ROS package or two that will give you a good head start?

Urbi Open Source Contest

We've previously featured Penn's AscTec quadrotors doing aggressive maneuvers; now you can see them out and about doing "Autonomous Multi-Floor Indoor Navigation with a Computationally Constrained MAV":

All of the computation is done onboard the 1.6Ghz Intel Atom processor and uses ROS for interprocess communication.

Credit: Shaojie Shen, Nathan Michael, and Vijay Kumar

Update: the GRASP lab also has the quadrotors running through thrown hoops:

On the heels of the Orocos RTT 2.0 release, Ruben Smits from KU Leuven has announced the first version of the new-and-improved Orocos/ROS integration

Hi orocos-dev, orocos-users, ros-users,

A first version of our Orocos/ROS integration is ready in the form of the orocos_toolchain_ros stack.

The stack is available at:

  • released version:
    • tarball
    • svn: svn co orocos_toolchain_ros
  • developers version: svn co orocos_toolchain_ros

The stack contains all of the Orocos Toolchain v2.0.1 except for the autoproj build system. The orocos_toolchain_ros stack contains patched versions of orogen and utilmm to automatically create ROS packages instead of autoproj packages for the automatic typekit generation for C++ classes.

On top of the Orocos Toolchain v2.0 this stack contains:

  • rtt_ros_integration: This package contains the following:
    • The ros-plugin: this RTT plugin allows Orocos/RTT components to contact the ROS master
    • CMake macros to automatically create Orocos/RTT typekits and transport plugins from .msg files
  • rtt_ros_integration_std_msgs: This package shows how the CMake macros are used; it creates the Orocos/RTT typekits and transport plugins for all roslib and std_msgs messages
  • rtt_ros_integration_example: This package shows how rtt_ros_integration should be used from an Orocos/RTT user/developer point of view. It contains a HelloRobot component that can be contacted using rostopic echo

And last but not least, the stack also includes the rtt_exercises package for Orocos/RTT newcomers.

The orocos_toolchain_ros stack itself is still undocumented; I'm currently working on that. Documentation on the Orocos Toolchain can be found via: http://

If anyone has any questions please do not hesitate to contact the orocos-users, orocos-dev or ros-users mailing list.

Visual SLAM for ROS

| No Comments | No TrackBacks

crossposted from

Helen Oleynikova, a student at Olin College of Engineering, spent her summer internship at Willow Garage working on improving visual SLAM libraries and integrating them with ROS. Visual SLAM is a useful building block in robotics with several applications, such as localizing a robot and creating 3D reconstructions of an environment.

Visual SLAM uses camera images to map out the position of a robot in a new environment. It works by tracking image features between camera frames, and determining the robot's pose and the position of those features in the world based on their relative movement. This tracking provides an additional odometry source, which is useful for determining the location of a robot. As camera sensors are generally cheaper than laser sensors, this can be a useful alternative or complement to laser-based localization methods.
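As a toy illustration of the odometry idea, per-frame motion estimates can be composed into a pose track. Real visual SLAM recovers each increment from tracked image features; this sketch (with invented names) only shows the composition step:

```python
import math

def integrate_motion(increments, pose=(0.0, 0.0, 0.0)):
    """Compose robot-frame increments (dx, dy, dtheta) into a global
    (x, y, theta) pose, as a visual odometry chain would."""
    x, y, th = pose
    for dx, dy, dth in increments:
        # Rotate the body-frame translation into the world frame, then turn.
        x += dx * math.cos(th) - dy * math.sin(th)
        y += dx * math.sin(th) + dy * math.cos(th)
        th += dth
    return x, y, th
```

Driving forward one meter, turning 90 degrees, and driving forward again ends near (1, 1). In practice each increment carries a small error that accumulates, which is one reason visual SLAM also maintains a map of feature positions rather than relying on dead reckoning alone.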

In addition to improving the VSLAM libraries, Helen put together documentation and tutorials to help you integrate Visual SLAM with your robot. VSLAM can be used with ROS C Turtle, though it requires extra modifications to install as it requires libraries that are being developed for ROS Diamondback.

To find out more, check out the vslam stack. For detailed technical information, you can also check out Helen's presentation slides below.

NDI Polaris driver in kul-ros-pkg

| No Comments | No TrackBacks

Dominick Vanthlenen of kul-ros-pkg announced the release of a driver for NDI's Polaris (R) 3D measurement system, along with packages for using the driver with both Orocos and ROS.

For all of you having a Polaris (R) 3D measurement system: a ndi_hardware stack has been released! This enables you to use your measurement system on a Linux system and let it publish tf frames.


Jeff Rousseau announced a basic URDF model for the iRobot Create as well as new aptima-ros-pkg code repository

Hi all,

I've put together a basic URDF for the iRobot Create platform. It's available for download through our new svn repo:

svn checkout

(Note: it currently relies on the erratic_gazebo_plugins package to implement its diff-drive)

Comments, bug reports and patches are appreciated

I plan to add/fix the following in the not-too-distant future:

  • a working bumper
  • tweak mass/friction params to be more realistic (they're fudged at the moment)
  • fix intermittent 'wobble' when transitioning between translations and rotations (friction coefficient issue?)


Group photo

The Willow Garage PR2 robots have been out at the PR2 Beta Sites for only a few short months and they have been busy with research projects, developing new software libraries for ROS, and creating Youtube hits. The first PR2 Beta Program Conference call was recently held to highlight this work, and the list of software that they have released as open source is already impressive.

A partial list of this software is below so that all ROS users and researchers can try it out and get involved. You'll find many more libraries in their public code repositories, and there is much more coming soon.

Georgia Tech

KU Leuven


  • EusLisp: now available under a BSD license
  • ROS/PR2 integration with EusLisp: roseus, pr2eus, and euscollada
  • jsk_ros_tools: includes rostool-alias-generator (e.g. rostopic_robot1) and jsk-rosemacs (support for anything.el)


  • knowrob: tools for knowledge acquisition, representation and reasoning
  • CRAM: reasoning and high level control for Steel Bank Common Lisp (cram_pl) and executive that reasons about locations (cram_highlevel)
  • prolog_perception: logically interact with perception nodes
  • pcl: contributions include pointcloud_registration, pcl_cloud_algos, pcl_cloud_tools, pcl_ias_sample_consensus, pcl_to_octree, mls



  • towel_folding: version of the towel folding demo from the pre-PR2 Beta Program that relies on two Canon G10 cameras mounted on the chest. Uses optical flow for corner detection.
  • LDA-SIFT: recognition for transparent objects
  • Utilities:
    • pr2_simple_motions: Classes for easy scripting in Python of PR2 head, arms, grippers, torso, and base
    • visual_feedback: Streamlined image processing for 3d point extraction and capturing images
    • stereo_click: Click a point in any stereo camera feed and the corresponding 3d point is published
    • shape_window: Provides a highgui-based interface for drawing and manipulating 2D shapes.


  • iSAM: Incremental Smoothing and Mapping, released under the LGPL.


  • OIT: Overhead interaction toolkit for tracking robots and people using an overhead camera.
  • deixis: Deictic gestures, such as pointing


  • articulation: (stable) Fit and select appropriate models for observed motion trajectories of articulated objects.
  • Contributions to pcl, including range image class and border extraction method


  • wviz: Web visualization toolkit to support their PR2 Remote Lab. Bosch has already been able to use their Remote Lab to collaborate with Brown University, and Brown University has released rosjs to access ROS via a web browser.


Zeroconf package for ROS

| No Comments | No TrackBacks

I Heart Robotics has released a zeroconf package for ROS that enables advertising of ROS masters using Zeroconf/Avahi. This provides configuration-less setup for applications like I Heart Robotics' RIND and will also be a useful tool for multi-robot communication.

Robots Using ROS: Meka's Robots

| No Comments | No TrackBacks


Above: Meka bimanual robot using Meka A2 compliant arm and H2 compliant hand

Meka builds a wide-range of robot hardware targeted at mobile manipulation research in human environments. Meka's work was previously featured in the post on the mobile manipulator Cody from Georgia Tech, which uses Meka arms and torso.

Meka was started by Aaron Edsinger and Jeff Weber to capitalize on their experience building robots like Domo, which featured force-controlled arms, hands, and neck built out of series-elastic actuators. Meka's expertise with series-elastic actuators allows them to target their hardware at human-centered applications, where compact, lightweight, compliant, force-controlled hardware is desired. Georgia Tech's HRI robot Simon, which uses Meka torso, head, arms, and hands, has proportions similar to a 5'7" female.

Meka initially built robot hands and arms, but is now transitioning into building all the components you need for a mobile manipulation platform. As Meka began to make this transition, they also started to transition to ROS. As a small startup company, they didn't have the resources to design and build the software drivers and libraries for a more complete mobile manipulation platform. They were also transitioning from a single real-time computer to using multiple computers, and they needed a middleware platform that would help them utilize this increased power.

One of Meka's new hardware products is the B1 Omni Base, which is getting close to completion. The B1 is based on the Nomadic XR4000 design and uses Holomni's powered casters. It is also integrated with the M3 realtime system and will have velocity, pose, and operational-space control available. The base houses a RTAI Ubuntu computer and can have up to two additional computers.

Meka is also designing two sensor heads that will be 100% integrated with ROS. The more fully-featured of the two will have five cameras, including Videre stereo, as well as a laser range finder, microphone array, and IMU. The tilting action of the head will enable the robot to use the laser rangefinder as a 3D sensor, in addition to the stereo.

The Meka software system consists of the Meka M3 control system coupled with ROS and other open-source libraries like Orocos' KDL. M3 is used to manage the realtime system and provide low-level GUI tools. ROS is used to provide visualizations and higher-level APIs to the hardware, such as motion planners that incorporate obstacle avoidance. ROS is also being used to integrate the two sensor heads that Meka has in development, as well as provide a larger set of hardware drivers so that customers can more easily integrate new hardware.

ROS is fully available with Meka's robots starting with last month's M3 v1.1 release. For lots of photos and video of Meka's hardware in action, see this Hizook post.

RL-Glue for ROS

| No Comments | No TrackBacks

Sarah Osentoski of Brown's RLAB recently announced a beta version of a ROS to RL-Glue bridge for reinforcement learning

Brown is pleased to announce our beta version of rosglue. rosglue is a bridge between ROS and RL-Glue, a standard reinforcement learning (RL) framework.

rosglue is designed to enable RL researchers and roboticists to work together rather than having to reimplement existing methods in both fields. A goal of rosglue is to allow ROS users to use RL algorithms provided by RL researchers and, likewise, to allow RL researchers to more easily use robots running ROS as a learning environment. rosglue allows a robot running ROS to become an RL-Glue environment, allowing RL-Glue compatible agents to control the robot. A high level visualization of the framework can be seen here.

rosglue uses a YAML configuration file to specify the topics and services and the learning problem. rosglue automatically subscribes to the topics and services specified in the file, sends actions selected by the RL-Glue agent to the robot using the appropriate topic or service, and creates observations from the specified topics for the agent.
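To make the idea concrete, here is a sketch of how such a configuration might be wired up. The schema and topic names below are invented for illustration; consult the rosglue documentation for the real file format:

```python
# Hypothetical rosglue-style configuration: the config names the topics that
# define the learning problem, and the bridge derives its subscriptions from it.
# (Schema and topic names are invented, not rosglue's actual format.)

config = {
    "actions":      {"topic": "/cmd_vel"},     # where selected actions are published
    "observations": [{"topic": "/base_scan"},  # what the agent observes
                     {"topic": "/odom"}],
    "reward":       {"topic": "/reward"},
}

def topics_to_subscribe(cfg):
    """Observation and reward topics feed the RL-Glue agent."""
    return [obs["topic"] for obs in cfg["observations"]] + [cfg["reward"]["topic"]]
```

Keeping the learning problem in a declarative file like this means pointing an RL agent at a different robot is a configuration change rather than new code, which is the interoperability rosglue is after.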

rosglue is currently available for download from the brown-ros-pkg repository via:

svn co rosglue

and preliminary documentation can be found here:

Robot Learning and Autonomy @ Brown (RLAB)

Canonical and polar scan matcher packages

| No Comments | No TrackBacks

The CCNY Robotics Lab, which was recently featured in this CityFlyer blog post, has just announced the release of two packages for laser scan registration.

Dear ROS-Users,

The CCNY Robotics Lab is pleased to announce the release of two packages for laser scan registration. canonical_scan_matcher is a wrapper around Andrea Censi's "Canonical Scan Matcher" [1]. polar_scan_matching is a wrapper around Albert Diosi's "Polar Scan Matching" [2].

Both packages estimate the displacement of a robot by comparing consecutive LaserScan messages. They can be used without providing any estimate for the displacement of the robot between the scans. In this way, they can serve as an odometric estimate for robots that don't have any other odometric system. Alternatively, a displacement estimate can be provided as input to the scan matchers, in the form of an Imu message or a tf transform, in order to produce better (or faster) scan matching results.
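A toy version of this process shows the role of the initial estimate. Real scan matchers (point-to-line ICP, polar matching) work on 2D range data with sub-sample precision; this brute-force 1-D sketch, with invented names, only searches integer shifts near a guess:

```python
def match(scan_a, scan_b, guess=0, window=3):
    """Find the integer shift within +/-window of `guess` that best aligns
    scan_b with scan_a, minimizing the mean squared difference.
    Toy sketch only; not the algorithm of either wrapped package."""
    best_shift, best_err = None, float("inf")
    for shift in range(guess - window, guess + window + 1):
        # Pair up overlapping samples under this candidate shift.
        pairs = [(a, scan_b[i + shift]) for i, a in enumerate(scan_a)
                 if 0 <= i + shift < len(scan_b)]
        if not pairs:
            continue
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift
```

A good initial guess (from an IMU or a tf transform, as above) lets the search window stay small, which is exactly why providing a displacement estimate makes matching better or faster.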

While the two scan matchers use different algorithms and parameters, the ROS wrappers are identical in terms of topics/frames/tf's, making the two packages interchangeable.

Documentation and usage instructions can be found at the respective wiki pages:

As usual, we have provided a small demo bag file with laser data and a launch file that can be used to view the packages in action. Each wiki page also has a video of what the output of the demo should look like.

We hope you find the scan matchers useful, and we extend our thanks to the authors of the original implementations.

Ivan Dryanovski
William Morris
The CCNY Robotics Lab

[1] A. Censi, "An ICP variant using a point-to-line metric" Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2008
[2] A. Diosi and L. Kleeman, "Laser Scan Matching in Polar Coordinates with Application to SLAM," Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, August 2005, Edmonton, Canada


Skybotix is offering their CoaX helicopter complete with basic ROS setup so customers can use ROS right out of the box.

The CoaX helicopter is a micro UAV targeted at the research and educational markets. The small 320g helicopter includes an IMU, a downward-looking and three optional sideward-looking sonars, pressure sensor, color camera, and Bluetooth, XBee, or WiFi communication. In addition to two DSPs (dsPIC33), the CoaX has an optional Gumstix Overo computer that can run ROS. You can see more of the specs on their hardware wiki page.

Skybotix fully supports open source with the CoaX. The CoaX API, including low-level firmware and controller, is available open source under a GNU LGPL license. Their Gumstix Overo setup comes with a basic ROS installation. They include a ROS publisher for the CoaX state, a demo application for transmitting video data, and a GUI for visualizing both. Although the CoaX comes with minimal additional ROS libraries, there is a growing community of micro-UAV developers using ROS, including the micro-UAV-focused ccny-ros-pkg repository.

The CoaX was developed in collaboration with ETH Zurich. The Skybotix Youtube channel has videos of ETH Zurich student projects. Skybotix recently released a speed module for the CoaX based on an optical sensor, which enables indoor speed control as well as indoor hovering (video).


About this Archive

This page is an archive of entries from September 2010 listed from newest to oldest.

August 2010 is the previous archive.

October 2010 is the next archive.

Find recent content on the main index or look in the archives to find all content.