December 2010 Archives

Taylor Veltrop has submitted the first entry to our ROS 3D Contest. He uses the Kinect and NITE to put a Kondo humanoid through pushups, waves, and other arm-control gestures. Great work! We look forward to seeing more entries.

Hi everyone!

Please take a look at my entry in the Kinect/RGB-D contest! I'm really happy with how it's turned out so far.

It's a small humanoid hobby robot by Kondo with a Roboard running ROS. The arms are controlled master/slave style over the network by a Kinect.

Entry: Humanoid Teleoperation

Taylor Veltrop

You can watch an interview with Taylor about this project over at Robot Dreams.

Path Optimization by Elastic Band


Visiting scholar Christian Connette from Fraunhofer IPA has just finished up his projects here at Willow Garage. Christian works on the Care-O-bot 3 robot platform, which shares many ROS libraries with the PR2 robot. While he was here at Willow Garage, he worked on implementing an "elastic band" approach (Quinlan and Khatib) for the ROS navigation stack. You can watch the video above to find out more about this work, or check out the slides below for more technical details (download PDF). The software is available as open source in the eband_local_planner package on ROS.org.
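The core idea is easy to sketch: treat the path as a band of connected waypoints, then repeatedly apply an internal contraction force that keeps the band tight and an external repulsive force that pushes it away from obstacles. Below is a minimal, illustrative Python sketch of one relaxation step (the gains, influence radius, and point-obstacle model are all simplifications, not the eband_local_planner implementation):

    import numpy as np

    def elastic_band_step(path, obstacles, k_internal=0.5, k_external=1.0,
                          influence=1.0, step=0.1):
        """One relaxation step of an elastic band over a 2D path.
        path: (N, 2) waypoints (endpoints stay fixed); obstacles: (M, 2) points."""
        new_path = path.copy()
        for i in range(1, len(path) - 1):
            # Internal contraction: pull toward the midpoint of the neighbors.
            f_int = k_internal * ((path[i - 1] + path[i + 1]) / 2.0 - path[i])
            # External repulsion: push away from obstacles within the influence radius.
            f_ext = np.zeros(2)
            for obs in obstacles:
                diff = path[i] - obs
                dist = np.linalg.norm(diff)
                if 1e-6 < dist < influence:
                    f_ext += k_external * (influence - dist) * diff / dist
            new_path[i] = path[i] + step * (f_int + f_ext)
        return new_path

Iterating this step deforms an initial path into a short, smooth one that keeps its distance from obstacles; the real planner also adapts the size of each "bubble" to the free space around it.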

In the works: ScaViSLAM


For Kinect/OpenNI users and VSLAM researchers, we're working on integrating Hauke Strasdat's ScaViSLAM framework into ROS. ScaViSLAM is a general and scalable framework for visual SLAM that should enable exciting applications like constructing 3D models of environments, creating 3D models of objects, augmented reality, and autonomous navigation.

We hope to release the ScaViSLAM library in Spring of 2011.

Happy Holidays from Lewis at Wash-U


C Turtle Update


This is a minor update mainly to deal with integration issues with PCL and Eigen.

OpenNI Updates


Development on our OpenNI/ROS integration for the Kinect and PrimeSense Development Kit 5.0 (PSDK 5.0) continues at a fast pace. For those of you participating in the contest or otherwise hacking away, here's a summary of what's new. As always, contributions and patches are welcome.

Driver Updates: Bayer Images, New point cloud and resolution options via dynamic_reconfigure

Suat Gedikli, Patrick Mihelich, and Kurt Konolige have been working on the low-level drivers to expose more of the Kinect's features. The low-level driver now has access to the Bayer pattern at 1280x1024, and we're working on "Fast" and "Best" (edge-aware) algorithms for de-bayering.
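For the curious, here is roughly what the "Fast" end of that quality spectrum looks like: a half-resolution demosaic that collapses each 2x2 Bayer cell into one RGB pixel. This is a numpy sketch only; the RGGB layout is an assumption, and the driver's actual algorithms differ:

    import numpy as np

    def debayer_half_res(bayer):
        """Half-resolution demosaic of an RGGB mosaic (layout is an
        assumption; check your sensor). Each 2x2 cell becomes one RGB
        pixel, with the two green samples averaged. Edge-aware methods
        instead interpolate missing channels at full resolution."""
        h, w = bayer.shape
        bayer = bayer[:h // 2 * 2, :w // 2 * 2].astype(np.float32)
        r = bayer[0::2, 0::2]
        g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
        b = bayer[1::2, 1::2]
        return np.dstack([r, g, b]).astype(np.uint8)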

We've also integrated support for high-resolution images from avin's fork, and we've added options to downsample the image to lower resolutions (QVGA, QQVGA) for performance gains.

You can now select these resolutions, as well as different options for the point cloud that is generated (e.g. colored, unregistered) using dynamic_reconfigure.
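For example, you can change these options at runtime with the dynamic_reconfigure Python client. The node and parameter names below are illustrative; list the real ones on a running system with `rosrun dynamic_reconfigure dynparam list`:

    import rospy
    import dynamic_reconfigure.client

    rospy.init_node('openni_config')
    # 'openni_node' and 'image_mode' are illustrative names, not
    # guaranteed to match the current driver.
    client = dynamic_reconfigure.client.Client('openni_node')
    client.update_configuration({'image_mode': 5})  # e.g. request a lower resolution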

Here are some basic (unscientific) performance stats on a 1.6 GHz i7 laptop:

  • point_cloud_type: XYZ+RGB, resolution: VGA (640x480), RGB image_resolution: SXGA (1280x1024)
    • XnSensor: 25%, openni_node: 60%
  • point_cloud_type: XYZ+RGB, resolution: VGA (640x480), RGB image_resolution: VGA (640x480)
    • XnSensor: 25%, openni_node: 60%
  • point_cloud_type: XYZ_registered, resolution: VGA (640x480), RGB image_resolution: VGA (640x480)
    • XnSensor: 20%, openni_node: 30%
  • point_cloud_type: XYZ_unregistered, resolution: VGA (640x480), RGB image_resolution: VGA (640x480)
    • XnSensor: 8%, openni_node: 30%
  • point_cloud_type: XYZ_unregistered, resolution: QVGA (320x240)
    • XnSensor: 8%, openni_node: 10%
  • point_cloud_type: XYZ_unregistered, resolution: QQVGA (160x120)
    • XnSensor: 8%, openni_node: 5%
  • No client connected (all cases)
    • XnSensor: 0%, openni_node: 0%

NITE Updates: OpenNI Tracker, 32-bit support in ROS

Thanks to Kei Okada and the University of Tokyo JSK Lab, the Makefile for the NITE ROS package now properly detects your architecture (32-bit vs. 64-bit) and downloads the correct binary.

Tim Field put together a ROS/NITE sample called openni_tracker for those of you wishing to:

  1. Figure out how to compile OpenNI/NITE code in ROS, and
  2. Export the skeleton tracking as tf coordinate frames.

The sample is a work in progress, but hopefully it will give you all a head start.
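Once the tracker is running, consuming the skeleton is ordinary tf usage. Here's a minimal listener sketch; the frame names are illustrative (the tracker publishes per-user frames):

    import rospy
    import tf

    rospy.init_node('skeleton_listener')
    listener = tf.TransformListener()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            # Illustrative frame names; e.g. the left hand of user 1.
            trans, rot = listener.lookupTransform(
                '/openni_depth_frame', '/left_hand_1', rospy.Time(0))
            rospy.loginfo('left hand at %s', trans)
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass
        rate.sleep()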

Point Cloud → Laser Scan

Tully Foote and Melonee Wise have written a pointcloud_to_laserscan package that converts the 3D data into a 2D 'laser scan'. This is useful for using the Kinect with algorithms that require laser scan data, like laser-based SLAM.
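The conversion itself is conceptually simple: take the points in a horizontal slab at the sensor's height, bin them by bearing, and keep the nearest return in each bin. A numpy sketch of that idea (a simplification; the package itself has more options and operates on live sensor_msgs data):

    import numpy as np

    def cloud_to_scan(points, z_min=-0.1, z_max=0.1,
                      angle_min=-np.pi / 2, angle_max=np.pi / 2, n_bins=180):
        """points: (N, 3) array in the sensor frame (x forward, y left, z up).
        Returns one range per angular bin (inf where the bin is empty)."""
        mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
        pts = points[mask]
        angles = np.arctan2(pts[:, 1], pts[:, 0])
        ranges_xy = np.hypot(pts[:, 0], pts[:, 1])
        scan = np.full(n_bins, np.inf)
        bins = ((angles - angle_min) / (angle_max - angle_min) * n_bins).astype(int)
        for b, r in zip(bins, ranges_xy):
            if 0 <= b < n_bins:
                scan[b] = min(scan[b], r)  # nearest return wins, like a real laser
        return scan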

OpenNI PCL

Radu Rusu is working on an openni_pcl package that will let you better use the Point Cloud Library with the OpenNI driver. This package currently contains a point cloud viewer as well as nodelet-based launch files for creating a voxel grid. More launch files are on the way.
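If you're wondering what a voxel grid filter does: it partitions space into small cubes and replaces the points in each cube with their centroid, shrinking the cloud dramatically while preserving its shape. A numpy sketch of the idea (PCL's implementation is the one to actually use):

    import numpy as np

    def voxel_grid_downsample(points, leaf_size=0.01):
        """Replace all points falling into the same leaf_size^3 cell
        with their centroid (the idea behind PCL's VoxelGrid filter)."""
        keys = np.floor(points / leaf_size).astype(np.int64)
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        n = inverse.max() + 1
        sums = np.zeros((n, 3))
        counts = np.zeros(n)
        np.add.at(sums, inverse, points)   # accumulate points per voxel
        np.add.at(counts, inverse, 1)
        return sums / counts[:, None]      # centroids, one per occupied voxel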

New tf frames

There are new tf frames that you can use, which simplify interaction in rviz (for those not used to Z-forward). The new frames also bring the driver into conformance with REP 103.

These frames are: /openni_camera, /openni_rgb_frame, /openni_rgb_optical_frame (Z forward), /openni_depth_frame, and /openni_depth_optical_frame (Z forward). For more info, see Tully's ros-kinect post.
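If you need to move data between the optical and non-optical frames yourself, the relationship is a fixed rotation. Per REP 103, body-style frames are X forward, Y left, Z up, while optical frames are Z forward, X right, Y down. A quick numpy illustration:

    import numpy as np

    # Rotation taking a point expressed in an optical frame (Z forward,
    # X right, Y down) into the matching body frame (X forward, Y left,
    # Z up), per REP 103.
    R_BODY_FROM_OPTICAL = np.array([[ 0,  0, 1],
                                    [-1,  0, 0],
                                    [ 0, -1, 0]])

    p_optical = np.array([0.0, 0.0, 2.0])        # 2 m straight ahead of the camera
    p_body = R_BODY_FROM_OPTICAL.dot(p_optical)  # -> [2, 0, 0]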

Roadmap

We're getting close to the point where we will be breaking the ni stack up into smaller pieces. This will keep the main driver lightweight while still enabling libraries to be integrated on top. We will also be folding in more of PCL's capabilities soon.

Kinect-based Person Follower


Garratt Gallagher from CSAIL/MIT is at it again. Above, you can see his work on using the new OpenNI-based ROS drivers to get an iRobot Create to follow a person around. This code is based on the skeleton tracker that comes with the NITE library.

For those of you figuring out how to get the NITE tracking data into ROS, take a look at Garratt's nifun package.
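The basic control loop behind a follower like this is pleasantly small: look up the person in tf and steer toward them. Here's a hedged sketch (frame names and gains are illustrative; this is not Garratt's code):

    import math
    import rospy
    import tf
    from geometry_msgs.msg import Twist

    rospy.init_node('follower_sketch')
    listener = tf.TransformListener()
    pub = rospy.Publisher('cmd_vel', Twist)
    rate = rospy.Rate(10)
    TARGET_DIST = 1.0  # hold roughly one meter from the person

    while not rospy.is_shutdown():
        try:
            # Illustrative frame names (openni_tracker publishes
            # per-user frames such as torso_1).
            trans, _ = listener.lookupTransform(
                '/openni_depth_frame', '/torso_1', rospy.Time(0))
            cmd = Twist()
            cmd.linear.x = 0.5 * (trans[0] - TARGET_DIST)          # close the gap
            cmd.angular.z = 1.0 * math.atan2(trans[1], trans[0])   # face the person
            pub.publish(cmd)
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass
        rate.sleep()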

Neato XV-11 Driver for ROS, albany-ros-pkg


All,

I would like to announce the availability of a simple driver for the Neato Robotics XV-11 for ROS. The neato_robot stack contains the neato_driver (a generic Python-based driver) and neato_node packages. The neato_node subscribes to a standard cmd_vel (geometry_msgs/Twist) topic to control the base, and publishes laser scans from the robot as well as odometry. The neato_slam package contains our current move_base launch and configuration files (still needs some work).
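Driving the base is then just a matter of publishing geometry_msgs/Twist messages on cmd_vel; a minimal sketch (velocity values are illustrative):

    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('neato_teleop_sketch')
    pub = rospy.Publisher('cmd_vel', Twist)
    cmd = Twist()
    cmd.linear.x = 0.1    # m/s forward
    cmd.angular.z = 0.3   # rad/s turn
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(cmd)   # keep streaming the command at 10 Hz
        rate.sleep()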

I've uploaded two videos thus far showing the Neato:

I also have to announce our repository, since we've never officially done that: albany-ros-pkg.googlecode.com

I hope to have documentation for this new stack on the ROS wiki later today/tonight.

Mike Ferguson
ILS Social Robotics Lab SUNY Albany

C Turtle Update: Now with Care-O-bot


This newest update of C Turtle now lets you get the various stacks necessary for running or simulating Fraunhofer IPA's Care-O-bot 3. PR2 and Care-O-bot have shared a dance together, and Care-O-bot is waving the Jolly Roger in the C Turtle poster, so we're excited to have the five Care-O-bot stacks join the official distribution. You can find out more on the care-o-bot wiki page.

This is also the first release of the experimental rosjava client for ROS. Maintenance of the existing client has been taken over by Lorenz Moesenlechner at TUM, and it has been updated to work with the upcoming Diamondback release.

Updates:

One More Thing... A Contest!



We promised you some early holiday Kinect presents, and we hope you like what you've seen:

We've been impressed with what we've seen from you:

In fact, hacks based on ROS, PCL, or OpenCV took three out of the seven "best kinect hacks" on TechCrunch.

Now, it's time to take it up a notch.

ROS 3D Contest

We invite you to combine an RGB-D sensor (e.g. Kinect, PSDK5.0) with ROS to produce something new, interesting, and fun. It could be anything, from a novel robot control interface to a data-driven art project to pure computer vision. Bonus points if it's also useful (see below). We're offering $8K in prizes.

As we promote open-source software, we encourage participants to share, and our rules and judging will reward participants who act in the spirit of collaboration.

Rules

  • All entries must be open source using an OSI-approved license and be hosted on a publicly accessible server.
  • Entries must be compatible with ROS and an RGB-D sensor (e.g. Kinect, PSDK 5.0).
  • Videos must be Creative Commons licensed. In submitting the video, you give Willow Garage permission to include snippets of your video in a montage video (with attribution).
  • You can use whatever additional hardware you want. Entrants who make custom hardware modifications are encouraged to post instructions for reproducing them.
  • You may submit as many entries as you like, as early as you like.

Deadline: All entries must be submitted by January 23, 2011.

Helpful Links

Judging Criteria

We will be using the Iron Chef judging system to rate each entry:

  • Taste (10 pts)
  • Presentation (i.e. documentation) (5 pts)
  • Originality (5 pts)

For the "taste" score, judges will take into account how well each entry uses the "secret ingredients" (ROS + RGB-D).

For the "originality" score, credit will go to the first entry (by date) to demonstrate a particular idea. Your entry can still be "original" even if subsequent entries use the same idea.

Prizes

  • First Place: $3000
  • Second Place: $2000
  • Third Place: $1000
  • "Most Useful": $2000

A special "Most Useful" prize will be awarded to the entry that provides the most useful, re-usable capabilities to the open-source community. As evidence of usefulness, judges will be biased towards entries that are used by other contest participants. This includes contributions to the underlying ROS drivers and OpenNI integration.

How to Enter

Contest Deadline: January 23, 2011

Entry page

Enter early, enter often!

This morning OpenNI was launched, complete with open-source drivers for the PrimeSense sensor. We are happy to announce that we have completed our first integration with OpenNI, which will enable users to get point cloud data from a PrimeSense device into ROS. We have also enabled support for the Kinect sensor with OpenNI.

This new code is available in the ni stack. We have low-level, preliminary integration of the NITE skeleton and hand-point gesture library. In the coming days we hope to have the data accessible in ROS as well.

For more information, please see ros.org/wiki/ni.


The RGB-D project, a joint research effort between Intel Labs Seattle and the University of Washington Department of Computer Science & Engineering, has lots of demo videos on their site showing the various ways in which they have been using the PrimeSense RGB-D sensors in their work. These demos include 3D modeling of indoor environments, object recognition, object modeling, and gesture-based interactions.

In the video above, the "Gambit" chess-playing robot uses the RGB-D sensor to monitor a physical chessboard and play against a human opponent. And yes, that is the ROS rviz visualizer in the background.

More Videos/RGB-D Project Page

Garratt Gallagher from CSAIL/MIT has followed up his Kinect piano and hand detection hacks with a full "Minority Report" interface. The demo builds on the pcl library to do hand detection. You'll find Garratt's open-source libraries for building your own interface in mit-ros-pkg.

Update: MIT News release with more details

PrimeSense™ is launching the OpenNI™ organization, an open effort to help foster "Natural Interaction"™ applications. As part of this effort, PrimeSense is releasing open-source drivers for the RGB-D sensor that powers the Kinect™ and other devices such as PrimeSense's Development Kit 5.0 (PSDK 5.0), and is making the hardware available to the OpenNI developer community! This will unlock full support for their sensor and also provide a commercially supported implementation. They are also releasing an open-source OpenNI API, which provides a common middleware for applications to access RGB-D sensors. Finally, they are releasing Windows and Linux binaries for the NITE skeleton-tracking library, which will enable developers to use OpenNI to create gesture and other natural-interaction applications. We at Willow Garage have been working with PrimeSense to help launch the open-source drivers and are happy to join PrimeSense in leading the OpenNI organization.

PrimeSense's RGB-D sensor is the start of a bright future of mass-market available 3D sensors for robotics and other applications. The OpenNI organization will foster and accelerate the use of 3D perception for human-computer/robot interaction, as well as help future sensors, libraries, and applications remain compatible as these technologies rapidly evolve.

For the past several weeks, we've been working with members of the libfreenect/OpenKinect community to provide open-source drivers, and we have already begun work to quickly integrate PrimeSense's contributions with these efforts. We will be using the full sensor API to provide better data for computer vision libraries, such as access to the factory calibration and image registration. We are also working on wrapping the NITE™ skeleton and hand-point tracking libraries into ROS. Having access to skeleton tracking will bring about "Minority Report" interfaces even faster. The common OpenNI APIs will also help the open-source community easily exchange libraries and applications that build on top. We've already seen many great RGB-D hacks -- we can't wait to see what will happen with the full power of the sensor and community unleashed.

This release was made possible by the many efforts of the open-source community. PrimeSense was originally planning on releasing these open-source drivers later, but the huge open-source Kinect community convinced them to accelerate their efforts and release now. They will be doing a more "formal" release in early 2011, but this initial access should give the developer community many new capabilities to play with over the holidays. As this is an early "alpha" release, we are still integrating the full capabilities and the ROS documentation is still being prepared. Stay tuned for some follow-up posts on how to start using these drivers and NITE with ROS.

PrimeSense's PSDK 5.0 is available separately and has several advantages for robotics: it is powered solely by USB, and the sensor package is smaller and lighter than the Kinect. This simplifies integration and will be important for use in smaller robots like quadrotors. PrimeSense is making a limited number of PrimeSense developer kits available for purchase. Please visit here to sign up to purchase the PSDK5.0.

You can visit OpenNI.org to find out more about the OpenNI organization and get binary builds of these releases. Developers interested in working with the source code can check out the repositories on GitHub and join the discussion groups at groups.google.com/group/openni-dev. To follow the efforts of the ROS community and Kinect, please join the ros-kinect mailing list.

OpenCV 2.2 Released


OpenCV 2.2 has been released. Major highlights include:

  • Reorganization into several smaller modules to better separate different OpenCV functionality, as well as experimental vs. stable code.
  • A new (alpha) GPU acceleration module, created with the support of NVidia.
  • Android support by Ethan Rublee.
  • A new unified features2d framework for keypoint extraction, descriptor computation, and matching (see the sketch after this list).
  • A LatentSVM object detector, contributed by the Nizhniy Novgorod State University (NNSU) team.
  • A gradient boosting trees model, also contributed by the NNSU team.
  • An experimental Qt backend for highgui by Yannick Verdie (docs).
  • A chamfer matching algorithm, contributed by Marius Muja, Antonella Cascitelli, Marco Di Stefano, and Stefano Fabri. See samples/cpp/chamfer.cpp.
  • Much more OpenCV 2.x functionality is now covered by the Python bindings; the new wrappers require numpy.
  • Over 300 issues have been resolved. Most of the issues (closed and still open) are listed at https://code.ros.org/trac/opencv/report/6.
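As promised above, here is a small sketch of the detect/describe/match pattern that features2d unifies. Note that the Python class names below (ORB, BFMatcher) come from later OpenCV releases and are shown only to illustrate the pattern; the 2.2-era C++ API spelled these FeatureDetector, DescriptorExtractor, and DescriptorMatcher:

    import cv2

    img1 = cv2.imread('scene1.png', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('scene2.png', cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute binary descriptors in one call.
    detector = cv2.ORB_create()
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)

    # Brute-force matching with cross-checking, sorted by distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print('found %d matches' % len(matches))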

For more information, please see the complete change log.


Kurt Konolige and Patrick Mihelich have prepared a technical overview of the Kinect calibration provided in the kinect_calibration package for ROS. For those of you wishing to understand the technology behind the PrimeSense sensor, it provides a detailed overview of how depth is calculated -- and how we go about providing the calibration necessary for perception algorithms.
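The gist of the model: the Kinect measures disparity between its projected IR pattern and a stored reference image, so depth follows the familiar stereo relation z = f*b/d. A sketch with typical, not device-specific, constants (use the kinect_calibration procedure to obtain real ones):

    # Typical values, not your device's calibration:
    F_PX  = 580.0    # IR camera focal length in pixels (approximate)
    B_M   = 0.075    # projector-to-camera baseline, roughly 7.5 cm
    D_OFF = 1090.0   # disparity offset in raw units (device-specific)

    def raw_to_depth(d_raw):
        """Convert a raw 11-bit Kinect disparity value to depth in meters."""
        disparity_px = (D_OFF - d_raw) / 8.0  # raw units are 1/8 pixel
        if disparity_px <= 0:
            return float('inf')
        return F_PX * B_M / disparity_px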

From Patrick Bouffard of the Hybrid Systems Lab in the UC Berkeley EECS department:

Hi all,

I wanted to share a video of something I've been working on:

Thanks to everyone who put together the kinect driver, the AscTec driver, PCL, and ROS in general--it made developing this fun and relatively painless!

Cheers,
Pat

Turtles Using ROS: Willie



Meet Willie, Ben Cohen's Reeves' turtle of 16 years. Willie's research interests include autonomous navigation using search-based motion planning, and AUVs.

Photos

Our latest C Turtle update adds the octomap_mapping and freiburg_tools stacks from the University of Freiburg (alufr-ros-pkg). We have also updated the kinect stack to include the latest calibration code, and our kinect calibration tutorial has more instructions.

Robot View, now on the Android Market, lets you create panoramas and upload them to the web with your cellphone. Here's a panorama taken at Stanford, and here's another taken in the WG offices.

This isn't just a cool new app for your phone. It's a preview of what's coming for OpenCV and ROS. The panoramic stitching engine used in Robot View will soon be part of OpenCV. You'll also be able to use it in your own Android Apps, because OpenCV now runs on Android:

And, of course, OpenCV and ROS are part of the same happy family, so, yes, it will soon be coming to a robot near you.

Kinect Calibration: Code Complete



Patrick Mihelich has just finished creating a kinect_calibration package for the ROS kinect stack. This calibration procedure takes advantage of the IR image access we added last week, plus Alex Trevor's helpful discovery that a halogen lamp can provide the necessary illumination for the IR image (in lieu of the IR projector).

Tomorrow we hope to release the new stack with this calibration code as well as a proper tutorial. If you're really curious, you can try these bare-bones instructions.

Kinect Piano and Hand Detection


Garratt Gallagher from CSAIL/MIT has created a fun Kinect hack: Kinect Piano! You hold your hand steady in front of the Kinect and then move your fingers to play individual notes. You can find the code in the piano package in mit-ros-pkg.

This hack also comes with a library to help you create your own -- Garratt wrote a kinect_tools package that uses pcl to implement a hand detector, which he demonstrates below:


Melonee Wise has put together a tutorial on Adding a Kinect to an iRobot Create, which we hope will help those of you interested in using the Kinect on inexpensive platforms. It walks you through two different methods of powering the Kinect directly off the Create (thanks, SparkFun!).

Tutorial

Announcement from Stéphane Magnenat of ETH Zurich/ASL

Hello,

We have developed in-house drivers for EPOS/ELMO controllers, which are based on the CANopen/CiA DSP 402 protocol.

These drivers are written in C, and we are currently refactoring these libraries in C++, with the goal of producing a clean ROS driver afterwards. Initially, the library will contain implementations of subsets of both CANopen and CiA DSP 402, with quirks for EPOS and ELMO. In the longer run, we will probably split the CANopen part into a separate library and add more complete CiA DSP 402 support, including PDO.

I guess that this might be of interest to the ROS user community. We are committed to an open-development model, and so contributions are very welcome.

Kind regards,

Stéphane

Repository Link

Announcement from Stefan Holzer

Hi all,

We've been working very hard for the past few days to get PCL working on Windows, and we are happy to announce the release of PCL 0.6! Because one of our friends and colleagues is turning 30 tomorrow, we decided to dedicate this release to him. Happy Birthday Bastian!

The 0.6 release brings:

  • preliminary Windows support
  • FLANN 1.6.2
  • Cminpack 1.1.0
  • portability fixes for MacOS
  • functionality patches and bug-fixes

Complete change-list

Thanks to all the contributors!

With this release, the core PCL libraries (dependencies and ROS wrappers) moved to the perception_pcl stack, while the visualization tools and tutorials moved to the perception_pcl_addons stack. No further development of PCL will be done in the old point_cloud_perception stack.

PS. Remember to bookmark http://www.pointclouds.org :)

Cheers,
Stefan
