March 2010 Archives

Rovio driver for ROS


I Heart Robotics has released a rovio stack for ROS, which contains a controller, a joystick teleop node, and associated launch files for the WowWee Rovio. There are also instructions and configuration for using the probe package from brown-ros-pkg to connect to Rovio's camera.

You can download the rovio stack from iheart-ros-pkg:

http://github.com/IHeartRobotics/iheart-ros-pkg

As the announcement notes, this is still a work in progress, but this release should help other Rovio hackers participate in adding new capabilities.

Marvin is an autonomous car from Austin Robot Technology and the Department of Computer Science at The University of Texas at Austin. The modified 1999 Isuzu VehiCross competed in the 2007 DARPA Urban Challenge and was able to complete many of the difficult tasks presented to the vehicles, including merging, U-turns, intersections, and parking.

The team members for Marvin have a long history of contributing to open-source robotics software, including the Player project. Recently, Marvin team members have been porting their software to ROS. As part of this effort, they have set up the utexas-art-ros-pkg open-source code repository, which provides drivers and higher-level libraries for autonomous vehicles.

Like many Urban Challenge vehicles, Marvin has a Velodyne HDL lidar and an Applanix Position and Orientation System for Land Vehicles (POS-LV). Drivers for both are available in utexas-art-ros-pkg, in the velodyne stack and the applanix package, respectively. The velodyne stack also includes libraries for detecting obstacles and drivable terrain, as well as tools for visualization in rviz.

The Marvin team has also released an art_vehicle stack that provides the libraries that make Marvin go, including their navigation system. You can try it out with their simulator built on Stage.


Professor Peter Stone's group in the Department of Computer Science has been using Marvin to do multiagent research. You can learn about the algorithms used in the Urban Challenge in their paper, "Multiagent Interactions in Urban Driving". More recently, they have been doing research in "autonomous intersection management". This research is investigating a multiagent framework that can handle intersections for autonomous vehicles safely and efficiently. As you can see in the video above, such intersections can handle far more vehicles than intersections designed for human drivers. For more information, you can watch a longer clip and read Kurt Dresner and Peter Stone's paper, "A Multiagent Approach to Autonomous Intersection Management".

Many people have contributed to the development of Marvin in the past. Current software development, including porting to ROS, is being led by Jack O'Quin and Dr. Michael Quinlan under the supervision of Professor Peter Stone.

Nao stack 0.11 released

From Armin Hornung at the University of Freiburg:

Version 0.11 of Freiburg's "nao" stack has just been released, providing a few minor fixes for v0.1. Files are available packaged at:
http://code.google.com/p/alufr-ros-pkg/downloads/list

or via source checkout from:
https://alufr-ros-pkg.googlecode.com/svn/trunk/nao

All code is fully compatible with ROS 1.0. On the robot side, this will probably be the last release compatible with the NaoQI API 1.3.17, as the new version 1.6.0 became available recently.

Additionally, extended documentation for all nodes in the stack is now available at http://www.ros.org/wiki/nao/.

Best regards,
Armin

Unstable release: slam_gmapping 1.1.0


slam_gmapping 1.1.0 has been released. As with all 1.1.x stacks, this release is unstable and not recommended for most users. This release adds a couple of features that help with autonomous exploration.

This is a good time to put in patches and requests for features to be added in the 1.1.x series: https://code.ros.org/trac/ros-pkg/newticket

Change lists

We've released several patch updates for our stacks. For Box Turtle, visualization and pr2_simulator have been updated. Several other development stacks have also been updated.

Change lists

HARK on Texai


When talking with people face-to-face, we may experience the "Cocktail Party Effect": even in a crowded, noisy room, we can use our binaural hearing to focus our listening on a single person speaking. With current telepresence technologies, however, we lose this important ability. Thankfully, there are already researchers giving us new tools for effectively bridging these remote distances.

Kyoto University's Professor Hiroshi Okuno and Assistant Professor Toru Takahashi, Honda Research Institute-Japan's Dr. Kazuhiro Nakadai, and four Kyoto University and Tokyo Institute of Technology graduate students spent a week at Willow Garage, integrating HARK with a Texai telepresence robot. HARK stands for Honda Research Institute-Japan (HRI-JP) Audition for Robots with Kyoto University. The robot audition system provides sound source localization, sound source separation, acoustic feature extraction, and automatic speech recognition.

The HARK system integrated well with ROS and our Texai. The Texai was outfitted with a green salad bowl helmet embedded with eight microphones, and there is now a hark package for ROS. Using this setup, their team put together three demos showing off the potential for telepresence technologies.

In the first demo, four people, including one present through a second Texai, talk over each other while the HARK-Texai separates out each voice. The second demo shows that sound is localized and that sound direction and power can be displayed in a radar chart. The final presentation puts these two demos together into a powerful new interface for the remote operator: the Texai pilot can determine where various sounds and voices are coming from, and select which sound to focus on. The HARK system then provides the pilot with the desired audio, cutting out any background noise or additional voices. Even in a crowded room, you can have a one-on-one conversation.

HARK came out of close collaboration between HRI-JP and Kyoto University, and Professor Okuno's passion to make computer/robot audition helpful for the hearing impaired. HARK is provided free and open source for research purposes and can be licensed for commercial applications.

Hizook on Robotis servos and ROS



Travis Deyle of Hizook/Healthcare Robotics Lab posted a nice overview of Robotis Dynamixel servos, including code for using the servos with gt-ros-pkg's robotis package. The Healthcare Robotics Lab uses these servos to construct their tilting Hokuyo laser rangefinder. You'll also find them in robots like Prairie Dog, which uses them in its CrustCrawler arm.

Article

What's Coming Up: Web interface


One of the new libraries we're working on for ROS is a web infrastructure that allows you to control the robot and various applications via a normal web browser. The web browser is a powerful interface for robotics because it is ubiquitous, especially with the availability of fully-featured web browsers on smart phones.

The web_interface stack for ROS allows you to connect to a Web-enabled ROS robot, see through its cameras, and launch applications. Beneath the hood is a Javascript library that is capable of sending and receiving ROS messages, as well as calling ROS services.

With just a couple of clicks from any web browser, you can start and calibrate a robot. We've written applications for basic capabilities like joystick control and tucking the arms of the PR2 robot. We've also written more advanced applications that let you select the locations of outlets on a map for the PR2 to plug into. We hope to see many more applications available through this interface so that users can control their robot easily with any web-connected device.

We're also developing 3D visualization capabilities based on the O3D extension that is available with upcoming versions of Firefox and Chrome. This 3D visualization environment is already being tested as a user interface for grasping objects that the robot sees.

All of these capabilities are still under active development and not recommended for use yet, but we hope that they will become useful platform capabilities in future releases.

Over the past month we've been working hard on PR2's capability to autonomously recharge itself. We developed a new vision-based approach for PR2 to locate a standard outlet and plug itself in, and integrated this capability into a web application you can run from your browser. We've developed this code to be compatible with ROS Box Turtle.

The following stacks have been released for Box Turtle:

Change lists

The following stacks have been updated:

These updates are mostly compatibility fixes for ROS 1.1 and minor bug fixes.

Change lists

ROS 1.1.1 Released



ROS 1.1.1 has been released. This is an unstable development release to test and develop features for ROS 1.2. It comes quickly on the heels of yesterday's 1.1 release due to a bug in the new service generators, and it also includes some other miscellaneous changes.

Change list

  • rosmaster: subscribers now declare topic types, though they cannot override existing type declarations (r8745)
  • rosmaster: roslib.masterapi: Added getTopicTypes() to support subscribers declaring types (r8745); see the sketch after this list
  • rospy: bug fix to on_shutdown() (#2536 r8746)
  • rospy: removed support for deprecated /time topic
  • rostopic: support for subscriber-declared types (uses the new getTopicTypes() Master method)
  • roslib: Links against rt to fix linking with the gold linker
  • roscpp: Fixed service generators to work with concurrent builds
  • roscpp: Removed support for deprecated /time topic
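
As a quick illustration of the new Master method, here is a minimal sketch (Python 2) that queries getTopicTypes() directly over XML-RPC; the caller ID '/topic_type_inspector' is just a placeholder:

    # Minimal sketch of querying the new getTopicTypes() Master API over
    # XML-RPC.  The caller ID is a placeholder.
    import os
    import xmlrpclib

    master_uri = os.environ.get('ROS_MASTER_URI', 'http://localhost:11311')
    master = xmlrpclib.ServerProxy(master_uri)
    code, status, topic_types = master.getTopicTypes('/topic_type_inspector')
    if code == 1:
        for topic, msg_type in topic_types:
            print '%s -> %s' % (topic, msg_type)
    else:
        print 'getTopicTypes failed: %s' % status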

    -- your friendly neighborhood ROS development team

Robots Using ROS: Bosch RTC's Robot


Bosch's Research and Technology Center (RTC) has a Segway RMP-based robot that they have been using with ROS for the past year to do exploration, 3D mapping, and telepresence research. They recently released version 0.1 of their exploration stack in the bosch-ros-pkg repository, which integrates with the ROS navigation stack to provide 2D exploration capabilities. You can use the bosch_demos stack to try this capability in simulation.

The RTC robot uses:

  • 1 Mac Mini
  • 2 SICK scanners
  • 1 Nikon D90
  • 1 SCHUNK/Amtec Powercube pan-tilt unit
  • 1 touch screen monitor
  • 1 Logitech webcam
  • 1 Bosch gyro
  • 1 Bosch 3-axis accelerometer

Like most research robots, it's frequently reconfigured: they added an additional Mac mini, Flea camera, and Videre stereo camera for some recent work with visual localization.

Bosch RTC has been releasing drivers and libraries in the bosch-ros-pkg repository. They will be presenting their approach for mapping and texture reconstruction at ICRA 2010 and hope to release the code for that as well. This approach constructs a 3D environment using the laser data, fits a surface to the resulting model, and then maps camera data onto the surfaces.

Researchers at Bosch RTC were early contributors to ROS, which is remarkable because bosch-ros-pkg marks the first time Bosch has contributed to an open source project. They have also been involved with the ros-pkg repository, improving the SLAM capabilities included with ROS Box Turtle, and they have been contributing improvements to a visual odometry library that is currently in the works.

ROS 1.1 Released (Unstable)



ROS 1.1 has been released. This is an unstable development release to test and develop features for ROS 1.2 (see version policy). Install at your own risk.


Major changes:

  • C++: There are major changes to roscpp, including a new message serialization API, custom allocator support, subscription callback types, non-const subscriptions, and fast intraprocess message passing. These changes are backwards-compatible, but require further testing and API stabilization.

  • Python: The rospy client library is largely unchanged, though the ROS Master has been moved to a separate package. There are many internal API changes being made to the Python code base to support the rosh scripting environment under active development.

For future development plans for the ROS 1.1.x branch, please see the ROS roadmap.

Please see the full changelist for a description of the many updates in this release.


The Healthcare Robotics Lab focuses on robotic manipulation and human-robot interaction to research improvements in healthcare. Researchers at HRL have been using ROS on EL-E and Cody, two of their assistive robots. They have also been publishing their source code at gt-ros-pkg.

HRL first started using ROS on EL-E for their work on Physical, Perceptual, and Semantic (PPS) tags (paper). EL-E has a variety of sensors and a Katana arm mounted on a Videre ERRATIC mobile robot base. The video below shows off many of EL-E's capabilities, including a laser pointer interface: people use a laser pointer to select real-world objects for the robot to interact with.

HRL does much of their research work in Python, so you will find Python-friendly wrappers for much of EL-E's hardware, including the Hokuyo UTM laser rangefinder, Thing Magic M5e RFID antenna, and Zenither linear actuator. You can also get CAD diagrams and source code for building your own tilting Hokuyo 3D scanner.

HRL also has a new robot, Cody, which you can see in the video below:

Update: you can read more on Cody at Hizook.

The end effector and controller are described in the paper, "Pulling Open Novel Doors and Drawers with Equilibrium Point Control" (Humanoids 2009). They've also published the CAD models of the end effector, and the source code can be found in the 2009_humanoids_epc_pull ROS package.

Whether it's providing open source drivers for commonly used hardware, CAD models of their experimental hardware, or source code to accompany their papers, HRL has embraced openness with their research. For more information:

What's Coming Up: Arm navigation



Efforts are underway to develop a navigation stack for the arms analogous to the navigation stack for mobile bases. This includes a wide range of libraries that can be used for collision detection, trajectory filtering, motion planning for robot manipulators, and specific implementations of kinematics for the PR2 robot.

As part of this effort, Willow Garage is happy to announce the release of our first set of research stacks for arm motion planning. These stacks include a high-level arm_navigation stack, as well as general-purpose stacks called motion_planners, motion_planning_common, motion_planning_environment, motion_planning_visualization, kinematics, collision_environment, and trajectory_filters. There are also several PR2-specific stacks. All of these stacks can be installed on top of Box Turtle:

Installation Page

Significant contributions were made to this set of stacks by our collaborators and interns over the past two years:

  • Ioan Şucan (from Lydia Kavraki's lab at Rice) developed the initial version of this framework as an intern at Willow Garage over the summers of 2008 and 2009 and has continued to contribute significantly since. His contributions include the OMPL planning library, which contains a variety of probabilistic planners, including ones developed by Lydia Kavraki's lab over the years.
  • Maxim Likhachev's group at Penn (including Ben Cohen, who was a summer intern at Willow Garage in 2009) contributed the SBPL planning library that incorporates the latest techniques in search-based motion planning.
  • Mrinal Kalakrishnan from USC developed the CHOMP motion planning library while he was an intern at Willow Garage in 2009. This library is based on the work of Nathan Ratliff, Matthew Zucker, J. Andrew Bagnell and Siddhartha Srinivasa.

Additional contributions came from Radu Rusu, Matei Ciocarlie, and Kaijen Hsiao (from Willow Garage) and Rosen Diankov (from CMU).

These stacks are currently classified as research stacks, which means that they have unstable APIs and are expected to change. We expect the core libraries to reach maturity fairly quickly and be released as stable software stacks, while other stacks will continue to incorporate the latest in motion planning research from the worldwide robotics community. We encourage the community to try them out, provide feedback, and contribute. A good starting point is the arm_navigation wiki page. There is also a growing list of tutorials.
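
To give a flavor of what driving these stacks from user code looks like, here is a rough sketch of sending a goal to a move_arm action server with actionlib. The package, message, and action server names used here (move_arm_msgs, MoveArmAction, 'move_right_arm') are assumptions drawn from the wiki documentation and may differ in the released stacks; the goal itself is left unfilled.

    # Hypothetical sketch of the actionlib pattern for commanding move_arm.
    # Package, action server, and message names are assumptions -- check the
    # arm_navigation wiki for the release you are using.
    import roslib; roslib.load_manifest('move_arm_msgs')
    import rospy
    import actionlib
    from move_arm_msgs.msg import MoveArmAction, MoveArmGoal

    rospy.init_node('arm_navigation_sketch')
    client = actionlib.SimpleActionClient('move_right_arm', MoveArmAction)
    client.wait_for_server()

    goal = MoveArmGoal()
    # ... fill in the motion plan request: planning group, planner id, and
    # goal constraints for the end effector ...
    client.send_goal(goal)
    client.wait_for_result(rospy.Duration(60.0))
    print 'move_arm finished with state', client.get_state()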

Here are some blog posts of demos that show these stacks in use:

  1. JSK demo (pr2_kinematics)
  2. Robot replugged (pr2_kinematics)
  3. Hierarchical planning (OMPL, move_arm)
  4. Towers of Hanoi (move_arm)
  5. Detecting tabletop objects (move_arm)

You can also watch the videos below that feature the work of Ben Cohen, Mrinal Kalakrishnan, and Ioan Şucan.

The individual stack and package wiki pages have descriptions of the current status. We have undergone a ROS API review for most packages, but the C++ APIs have not yet been reviewed. We encourage you to use the ROS API -- we will make our best effort to keep this API stable. The C++ APIs are being actively worked on (see the Roadmap on each Wiki page for more details) and we expect to be able to stabilize a few of them in the next release cycle.

Please feel free to point out bugs, make feature requests, and tell us how we can do better. We particularly encourage developers of motion planners to look at integrating their motion planners into this effort. We have made an attempt to modularize the architecture of this system so that components developed by the community can be easily plugged in. We also encourage researchers who may use these stacks on other robots to get back to us with feedback about their experience.

Best Regards,

Your friendly neighborhood arm navigation development team

Sachin Chitta, Gil Jones (Willow Garage)
Ioan Sucan (Rice University)
Ben Cohen (University of Pennsylvania)
Mrinal Kalakrishnan (USC)

Videos

Ioan Şucan, OMPL (blog post):

Ben Cohen, SBPL (blog post):

Mrinal Kalakrishnan, CHOMP (blog post):

What's Coming Up: PR2 Calibration


In recent posts, we've showcased the rviz 3-D visualizer and navigation stack, two of the many useful libraries and tools included in the ROS Box Turtle distribution. Now, we'd like to highlight what we're developing for future distribution releases.

The first is the PR2 calibration stack. The PR2 has two stereo cameras, two forearm cameras, one high-resolution camera, and both a tilting and a base laser rangefinder. That's a lot of sensor data to combine with the movement of the robot.

The PR2 calibration stack recently made our lives simpler when updating our plugging-in code. Eight months ago, without accurate calibration between the PR2 kinematics and sensors, the original plugging-in code applied a brute-force spiraling approach to determining an outlet's position. Our new calibration capabilities give the PR2 the newfound ability to plug into an outlet in one go.

The video above shows how we're making calibration a simpler, more automated process for researchers. The PR2 robot can calibrate many of its sensors automatically by moving a small checkerboard through various positions in front of its sensors. You can start the process before lunch, and by the time you get back, there's a nicely calibrated robot ready to go. We're also working on tools to help researchers understand how well each individual sensor is calibrated.

The PR2 calibration stack is still under active development, but can be used by PR2 robot users with Box Turtle. In the future, we hope this stack will become a mature ROS library capable of supporting a wide variety of hardware platforms.

Robots Using ROS: JSK's Kawada HRP-2V



The Kawada HRP-2V is a variant of the HRP-2 "Promet" robot. It uses the torso, arms, and sensor head of the HRP-2, but it is mounted to an omni-directional mobile base instead of the usual humanoid legs. The JSK Lab at Tokyo University uses this platform for hardware and software research.

In May of 2009 at the ICRA conference, the HRP-2V was quickly integrated with the ROS navigation stack as a collaboration between JSK and Willow Garage. Previously, JSK had spent two weeks at Willow Garage integrating their software with ROS and the PR2. ICRA 2009 was held in Kobe, Japan, and Willow Garage had a booth. With laptops and the HRP-2V set up next to the booth, JSK and Willow Garage went to work getting the navigation stack on the HRP-2V. By the end of the conference, the HRP-2V was building maps and navigating the exhibition hall.

Logging and playback are among the most critical features when developing software for robotics. Whether you're a hardware engineer recording diagnostic data, a researcher collecting data sets, or a developer testing algorithms, the ability to record data from a robot and play it back is crucial. ROS supports these needs with "Bag files" and tools like rosbag and rxbag.

rosbag can record data from any ROS data source into a bag file, whether it be simple text data or large sensor data, like images. The tool can also record data from programs running on multiple computers. rosbag can play back this data just as it was recorded -- it looks identical. This means that data can be recorded from a physical robot and used to create virtual robots to test software. With the appropriate bag file, you don't even need to own a physical robot.
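
Beyond the command-line tools, bag files can also be read programmatically. Here is a small sketch using the rosbag Python API, assuming it is available in your release; the file name and topic are placeholders.

    # Small sketch of reading recorded messages back out of a bag file with
    # the rosbag Python API; file name and topic are placeholders.
    import roslib; roslib.load_manifest('rosbag')
    import rosbag

    bag = rosbag.Bag('robot_run.bag')
    for topic, msg, stamp in bag.read_messages(topics=['/scan']):
        # Each message comes back exactly as it was recorded, with its
        # original timestamp.
        print stamp, topic, type(msg)
    bag.close()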

There are a variety of tools to help you manage your stored data, such as tools for filtering data and updating data formats. There are also tools to manage research workflows for sending bag files to Amazon's Mechanical Turk for dataset labeling. For data visualization, there is the versatile rxbag tool. rxbag can scan through camera data like a movie editor, plot numerical data, and view the raw message data. You can also write plugins to define viewers for your own data types.

You can watch the video to see rosbag and rxbag in action. Both of these tools are part of the ROS Box Turtle release, which you can download from ROS.org.

Several stacks have transitioned to 1.1 unstable development branches: common_msgs 1.1.1, image_pipeline 1.1.0, and vision_opencv 1.1.0. These are unstable development releases: most users should keep using the 1.0 branches in Box Turtle. Our stack version policy describes our general development model for stable and unstable releases.

common 1.1

The primary part of this release is the addition of the nodelet package. There was discussion earlier on the ros-users mailing list about how this can be used as an optimization. If you are interested, please take a look at the documentation or code. Feedback is always appreciated.

image_pipeline 1.1

The major new features are in stereo_image_proc. It takes advantage of improvements to the OpenCV stereo block matcher, particularly to reduce fringe artifacts at edges. For comparison, the reference implementation of stereo block matching (on which OpenCV's was based) can be used instead, but they should now give roughly the same results. stereo_image_proc now advertises a separate point cloud topic using the new and improved PointCloud2 message type.
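
As a quick sketch of consuming that output, a node can subscribe to the new point cloud topic roughly like this; the '/my_stereo/points2' topic name is an assumption that depends on how your stereo camera namespace is set up.

    # Minimal sketch of subscribing to the PointCloud2 output of
    # stereo_image_proc; the topic name is an assumption.
    import roslib; roslib.load_manifest('sensor_msgs')
    import rospy
    from sensor_msgs.msg import PointCloud2

    def on_cloud(cloud):
        rospy.loginfo('cloud received: %d x %d points', cloud.height, cloud.width)

    rospy.init_node('stereo_cloud_listener')
    rospy.Subscriber('/my_stereo/points2', PointCloud2, on_cloud)
    rospy.spin()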

Change lists

Several stacks have been updated with patches for Box Turtle.

pr2_calibration, which is still in unstable development, was also updated to 0.3.0.

Change lists


Like the Aldebaran Nao, the "Prairie Dog" platform from the Correll Lab at the University of Colorado is an example of the ROS community building on each other's results, and the best part is that you can build your own.

Prairie Dog is an integrated teaching and research platform built on top of an iRobot Create. It's used in the Multi-Robot Systems course at the University of Colorado, which teaches core topics like locomotion, kinematics, sensing, and localization, as well as multi-robot issues like coordination. The source code for Prairie Dog, including mapping and localization libraries, is available as part of the prairiedog-ros-pkg ROS repository.

Prairie Dog uses a variety of off-the-shelf robot hardware components: an iRobot Create base, a 4-DOF CrustCrawler AX-12 arm, a Hokuyo URG-04LX laser rangefinder, a Hagisonic Stargazer indoor positioning system, and a Logitech QuickCam 3000. The Correll Lab was able to build on top of existing ROS software packages, such as brown-ros-pkg's irobot_create and robotis packages, plus contribute their own in prairiedog-ros-pkg. Prairie Dog is also integrated with the OpenRAVE motion planning environment.

Starting in the Fall of 2010, RoadNarrows Robotics will be offering a Prairie Dog kit, which will give you all the off-the-shelf components, plus the extra nuts and bolts. Pricing hasn't been announced yet, but the basic parts, including a netbook, will probably run about $3500.

For more information, please see:


Photo: Prairie Dogs busy creating maps for kids and parents


The Care-O-bot 3 is a mobile manipulation robot designed by Fraunhofer IPA that is available both as a commercial robotic butler and as a platform for research. The Care-O-bot software has recently been integrated with ROS and, in just a short period of time, already supports everything from low-level device drivers to simulation inside Gazebo.

The robot has two sides: a manipulation side and an interaction side. The manipulation side has a SCHUNK Lightweight Arm 3 with SDH gripper for grasping objects in the environment. The interaction side has a touchscreen tray that serves as both input and "output". People can use the touchscreen to select tasks, such as placing drink orders, and the tray can deliver objects to people, like their selected beverage.

The goals of the Care-O-bot research program are to:

  • provide a common open source repository for the hardware platform
  • provide simulation models of hardware components
  • provide remote access to the Care-O-bot 3 hardware platform

Those first two goals are supported by the care-o-bot open source repository for ROS, which features libraries for drivers, simulation, and basic applications. You can easily download the source code and perform a variety of tasks in simulation, such as driving the base and moving the arm. These support the third goal of providing remote access to physical Care-O-bot hardware via their web portal.


For sensing, the Care-O-bot uses two SICK S300 laser scanners, a Hokuyo URG-04LX laser scanner, two Pike F-145 FireWire cameras for stereo, and Swissranger SR3000/SR4000s. The cob_driver stack provides ROS software integration for these sensors.

The Care-O-bot runs on a CAN interface with a SCHUNK LWA3 arm, an SDH gripper, and a tray mounted on a PRL 100 for interacting with its environment. It also has SCHUNK PW 90 and PW 70 pan/tilt units, which give it the ability to bow through its foam outer shell. The CAN interface is supported through several Care-O-bot ROS packages, including cob_generic_can and cob_canopen_motor, as well as wrappers for libntcan and libpcan. The SCHUNK components are also supported by various packages in the cob_driver stack.

The video below shows the Care-O-bot in action. NOTE: as the Care-O-bot source code is still being integrated with ROS, the capabilities you see in the video are not part of the ROS repository.


Junior is the Stanford Racing Team's autonomous car, most famous for finishing a close second in the DARPA Urban Challenge. It successfully navigated a difficult urban environment that required obeying traffic rules, parking, passing, and many other challenges of real-world driving.

Those of you familiar with Junior are probably saying, "Junior doesn't use ROS! It uses IPC!"

That's mostly true, but researchers have recently started using ROS-based perception libraries in Junior's obstacle classification system.

From the very start, one of the goals of ROS was to keep libraries small and separable so that you could use as little, or as much, as you want. In the case of the tiny i-Sobot, a developer was able to just use ROS's PS3 joystick driver. When frameworks get too large, they become much more difficult to integrate with other systems.

In the case of Junior, Alex Teichman was able to bring his image descriptor library for ROS onto Junior. He has been using this library, along with ROS point cloud libraries, to develop Junior's obstacle classification system. Other developers on the team will also be allowed to choose ROS for their programs where appropriate.

You can find out more about Alex's image descriptor library at ros.org/wiki/descriptors_2d.

In addition to core robotics libraries, like navigation, the ROS Box Turtle release also comes with a variety of tools necessary for developing robotics algorithms and applications. One of the most commonly used tools is rviz, a 3-D visualization environment that is part of the ROS Visualization stack.

Whether it's 3D point clouds, camera data, maps, robot poses, or custom visualization markers, rviz can display customizable views of this information. rviz can show you the difference between the physical world and how the robot sees it, and it can also help you create displays that show users what the robot is planning to do.
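
As a small example of the custom visualization markers mentioned above, a node can publish a visualization_msgs/Marker for rviz to display. This is a minimal sketch; the frame name and topic depend on your robot setup.

    # Minimal sketch of publishing a visualization marker for rviz to
    # display; the frame_id is an assumption about your fixed frame.
    import roslib; roslib.load_manifest('visualization_msgs')
    import rospy
    from visualization_msgs.msg import Marker

    rospy.init_node('marker_demo')
    pub = rospy.Publisher('visualization_marker', Marker)

    marker = Marker()
    marker.header.frame_id = '/base_link'   # assumed fixed frame
    marker.ns = 'demo'
    marker.id = 0
    marker.type = Marker.SPHERE
    marker.action = Marker.ADD
    marker.pose.position.x = 1.0
    marker.pose.orientation.w = 1.0
    marker.scale.x = marker.scale.y = marker.scale.z = 0.2
    marker.color.r = 1.0
    marker.color.a = 1.0

    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        marker.header.stamp = rospy.Time.now()
        pub.publish(marker)
        rate.sleep()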

You can watch the video above for more details about what the ROS rviz tool has to offer, and you can read documentation and download the source code at: ros.org/wiki/rviz.

Robots Using ROS: Aldebaran Nao


The Aldebaran Nao is a commercially available, 60 cm tall humanoid robot targeted at research labs and classrooms. The Nao is small, but it packs a lot into its tiny frame: four microphones, two VGA cameras, touch sensors on the head, infrared sensors, and more. The use of Nao with ROS has demonstrated how quickly open-source code can enable a community to come together around a common hardware platform.

The first Nao driver for ROS was released by Brown University's RLAB in November of 2009. This initial release included head control, text-to-speech, basic navigation, and access to the forehead camera. Just a couple of days later, the University of Freiburg's Humanoid Robot Lab used Brown's Nao driver to develop new capabilities, including torso odometry and joystick-based tele-operation. Development didn't stop there: in December, the Humanoid Robot Lab put together a complete ROS stack for the Nao that added IMU state, a URDF robot model, visualization of the robot state in rviz, and more.

The Nao SDK already comes with built-in support for the open-source OpenCV library. It will be exciting to see what additional capabilities the Nao will gain now that it can be connected to the hundreds of different ROS packages that are freely available.

Brown is also using open source and ROS as part of their research process:

Publishing our ROS code as well as research papers is now an integral part of disseminating our work. ROS provides the best means forward for enabling robotics researchers to share their results and more rapidly advance the state-of-the-art.

-- Chad Jenkins, Professor, Brown University

The University of Freiburg's Nao stack is available on alufr-ros-pkg. Brown's Nao drivers are available on brown-ros-pkg, along with drivers for the iRobot Create and a GStreamer-based webcam driver.

There have been several new releases containing patches, new features, and unstable updates.

Two of our ROS stacks have gone to their "unstable" release cycle: common_msgs and vision_opencv. We will be using this release cycle to develop new features for the next stable release.

For our patch releases, web_interface and laser_drivers have both been updated with bug fixes.

pr2_calibration, which is an unstable library still under active development, has reached its 0.3.0 release.

Change lists

Robots Using ROS: i-Sobot


ROS is starting to gain traction in Japan thanks to some dedicated early adopters and community-based translation efforts. Last year, the ROS Navigation stack was ported to Tokyo University's Kawada HRP2-V robot, and now it's finding use with hobby robots as well.

ROS libraries are designed to be small and easily broken apart. In this case, a small use of ROS has led to the claim of "smallest humanoid robot controlled by ROS." As the video explains, ROS isn't running on the robot. The i-Sobot is hooked up to an Arduino, which talks to a PC, which uses the ROS PS3 joystick driver. We're always thrilled to see code being reused, whether it's something as big as the ROS navigation stack, or something as small as a PS3 joystick driver.
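
For illustration, the PC side of a setup like this might look roughly like the sketch below: subscribe to the joystick topic and forward simple commands to the Arduino over a serial port. The topic name, the Joy message package, the serial device, and the one-byte command protocol are all assumptions, not details from the demo.

    # Hypothetical sketch of the PC side: listen to the joystick topic from
    # the ROS PS3 joystick driver and forward simple commands to an Arduino
    # over serial.  Topic name, Joy message package, serial device, and the
    # one-byte command protocol are all assumptions.
    import roslib; roslib.load_manifest('joy')
    import rospy
    import serial
    from joy.msg import Joy   # later releases use sensor_msgs/Joy

    arduino = serial.Serial('/dev/ttyUSB0', 9600)   # assumed port and baud rate

    def on_joy(msg):
        # Forward a crude command byte based on the first stick's vertical axis.
        if msg.axes and msg.axes[1] > 0.5:
            arduino.write('F')      # hypothetical "walk forward" command
        elif msg.axes and msg.axes[1] < -0.5:
            arduino.write('B')      # hypothetical "walk backward" command
        else:
            arduino.write('S')      # hypothetical "stop" command

    rospy.init_node('isobot_relay')
    rospy.Subscriber('joy', Joy, on_joy)
    rospy.spin()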

The video and demo were put together by "Ogutti", who has been maintaining a Japanese blog on ROS at ros-robot.blogspot.com/. Most recently, he has been blogging about using the Care-O-bot 3 simulation libraries.

In addition to Ogutti's Japanese ROS blog, you can go to ros.org/wiki/ja to follow the progress of the Japanese translation efforts for the ROS documentation.

What's in the Box: Navigation


Now that the ROS Box Turtle release is out, we'd like to highlight some of its core capabilities, and share some of the features that are in the works for the next release.

First up is the ROS Navigation stack, perhaps the most broadly used ROS library. The Navigation stack is in use throughout the ROS community, running on robots both big and small. Many institutions, including Stanford University, Bosch, Georgia Tech, and the University of Tokyo, have configured this library for their own robots.

The ROS Navigation stack is robust, having completed a marathon -- 26.2 miles -- over several days in an indoor office environment. Whether the robot is dodging scooters or driving around blind corners, the Navigation stack provides robots with the capabilities needed to function in cluttered, real-world environments.
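
To give a sense of how an application drives the Navigation stack, here is a minimal sketch of sending a single goal pose to the move_base action server; the '/map' frame and the goal coordinates are placeholders for your own setup.

    # Minimal sketch of sending one navigation goal to the move_base action
    # server used by the Navigation stack; frame and coordinates are
    # placeholders.
    import roslib; roslib.load_manifest('move_base_msgs')
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = '/map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 2.0
    goal.target_pose.pose.position.y = 1.0
    goal.target_pose.pose.orientation.w = 1.0

    client.send_goal(goal)
    client.wait_for_result()
    print 'move_base finished with state', client.get_state()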

You can watch the video above for more details about what the ROS Navigation stack has to offer, and you can read documentation and download the source code at: ros.org/wiki/navigation.

Robots Using ROS: STAIR 1


With so many open-source repositories offering ROS libraries, we'd like to highlight the many different robots that ROS is being used on. It's only fitting that we start where ROS started, with STAIR 1: STanford Artificial Intelligence Robot 1. Morgan Quigley created the Switchyard framework to provide a software framework for this mobile manipulation platform, and the lessons learned from building software to address the challenges of mobile manipulation robots gave birth to ROS.

The mobile manipulation problem space is too large for any one group to solve alone. It requires multiple teams tackling separate challenges, like perception, navigation, vision, and grasping. STAIR 1 is a research robot built to address these challenges: a Neuronics Katana arm, a Segway base, and an ever-changing array of sensors, including a custom laser-line scanner, a Hokuyo laser rangefinder, an Axis PTZ camera, and more. The experience of developing for this platform in a research environment provided many lessons for ROS: small components, simple reconfiguration, lightweight coupling, easy debugging, and scalability.

STAIR 1 has tackled a variety of research challenges, from accepting verbal commands to locate staplers, to opening doors, to operating elevators. You can watch the video of STAIR 1 operating an elevator below, and you can watch more videos and learn more about the STAIR program at stair.stanford.edu. You can also read Morgan's slides on ROS and STAIR from an IROS 2009 workshop.

In addition to the many contributions made to the core, open-source ROS system, you can also find STAIR-specific libraries at sail-ros-pkg.sourceforge.net/, including the code used for elevator operation.

Box Turtle

We are excited to announce that our first ROS Distribution, "Box Turtle", is ready for you to download. We hope that regular distribution releases will increase the stability and adoption of ROS.

A ROS Distribution is just like a Linux distribution: it provides a broad set of libraries that are tested and released together. Box Turtle contains the many 1.0 libraries that were recently released. The 1.0 libraries in Box Turtle have stable APIs and will only be updated to provide bug fixes. We will also make every effort to maintain backwards-compatibility when we do our next release, "C Turtle".

There are many benefits of a distribution. Whether you're a developer trying to choose a consistent set of ROS libraries to develop against, an instructor needing a stable platform to develop course materials against, or an administrator creating a shared install, distributions make installing and releasing ROS software more straightforward. The Box Turtle release lets you easily get bug fixes without worrying about new features breaking backwards compatibility.

With this Box Turtle release, installing ROS software is easier than ever before thanks to Ubuntu Debian packages. You can now "apt-get install" ROS and its many libraries. We've been using this capability at Willow Garage to install ROS on all of our robots. Our plugs team was able to write their revamped code on top of the Box Turtle libraries, which saved them time and provided greater stability.

Box Turtle includes two variants: "base" and "pr2". The base variant contains our core capabilities and tools, like navigation and visualization. The pr2 variant comes with the libraries we use on our shared installs for the PR2, as well as the libraries necessary for running the PR2 in simulation. Robots vary much more than personal computers, and we expect that future releases will include variants to cover other robot hardware platforms.

We have new installation instructions for ROS Box Turtle. Please try them out and let us know what you think!

The following stacks have been updated with bug fixes:

Change lists

Find this blog and more at planet.ros.org.

