Results tagged “Kinect”

SV-ROS's team Maxed-Out earns the highest score at IROS 2014 in the first Microsoft Kinect Challenge.

The Microsoft Kinect Challenge is a showcase for BRIN (Benchmark Indoor Robot Navigation), the scoring software used to evaluate the competition. Each team had to create a mapping and autonomous navigation software solution that would run successfully on a provided Adept Pioneer 3DX robot.

The number of waypoints reached, the time taken, and accuracy are combined to determine a contestant's score. Microsoft Research's Gershon Parent, the author of the BRIN scoring software, hopes to see BRIN become a universally accepted way of benchmarking autonomous robots' indoor navigation ability.

SV-ROS is a Silicon Valley ROS users group that meets on the second-to-last Wednesday of each month at the Hacker Dojo in Mountain View, CA. Team Maxed-Out is led by Greg Maxwell; key team members are Girts Linde, Ralph Gnauck, Steve Okay, and Patrick Goebel. The Maxed-Out effort began in May 2014 and successfully created a winning ROS mapping, localization, and navigation solution in just a few months, beating 5 other international teams.

Maxed-Out's winning software solution was based on the ROS Hydro distribution running on a powerful GPU-enabled laptop with Ubuntu 12.04 and Nvidia CUDA 6.0 parallel processing software. The team was able to outscore all the other teams by incorporating RTAB-Map, the mapping, localization, navigation, and point cloud library developed by Mathieu Labbe, a graduate student at the Université de Sherbrooke.

Team Maxed-Out's code is up at SV-ROS's GitHub repository and documented on this meetup page.

Pictures of the event are posted here.

Microsoft Kinect v2 Driver Released

From Thiemo and Alexis via ros-users@

Dear ROS Community,

I am Thiemo from the Institute for Artificial Intelligence at the University of Bremen. I am currently a PhD Student under the supervision of Prof. Michael Beetz. I'm writing this together with Alexis Maldonado, another PhD Student at our lab, who has helped mainly with the hardware aspects.

In the past few months I developed a toolkit for the Kinect v2 including: a ROS interface to the device (driver) using libfreenect2, an intrinsics/extrinsics calibration tool, an improved depth registration method using OpenCL, a lightweight pointcloud/images viewer based on the PCL visualizer and OpenCV.

The system has been developed for and tested in both ROS Hydro and Indigo (Ubuntu 12.04 and 14.04).

The driver has been optimized for high performance, meaning it can process the sensor's data at the full frame rate (30 Hz) on reasonable hardware (not only high-end machines). This was achieved by parallelizing the image pipeline. Care has also been taken to make it possible to transfer the complete data over compressed topics to other PCs (30 Hz of data uses approx. 40 MB/s on the network).
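
(For illustration, here is a minimal rospy sketch of reading one of those compressed topics from another machine. The topic name is an assumption based on typical kinect2_bridge naming; check the repository README for the topics actually published on your system.)

#!/usr/bin/env python
# Minimal sketch (assumed topic name): decode compressed color frames
# published by a Kinect v2 bridge node running on another PC.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import CompressedImage

def on_image(msg):
    # The message carries a JPEG/PNG payload; decode it into a BGR image.
    frame = cv2.imdecode(np.frombuffer(msg.data, dtype=np.uint8), cv2.IMREAD_COLOR)
    rospy.loginfo("received %dx%d frame", frame.shape[1], frame.shape[0])

if __name__ == "__main__":
    rospy.init_node("kinect2_compressed_listener")
    rospy.Subscriber("/kinect2/hd/image_color/compressed", CompressedImage,
                     on_image, queue_size=1, buff_size=2**24)
    rospy.spin()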

Especially interesting for other people with a PR2 robot: we have built a small mITX computer using an AMD A10-7850K processor and a PicoPSU. It is installed as a backpack on our PR2, with a Kinect v2 on the head above the cameras. This 'backpack PC' is necessary because the built-in computers on the PR2 don't support USB 3.0 and are already quite loaded with their normal workload.

We are glad to announce the release of the software to the ROS community, hoping it will be useful for others, especially people working in robotics research. Please see the following GitHub repository:

  https://github.com/code-iai/iai_kinect2

You will need a slightly patched version of libfreenect2, as indicated on the README. It is here:
  https://github.com/wiedemeyer/libfreenect2

Screenshots are also on the GitHub page.

We are looking forward to improvements and/or bug reports. Please use the GitHub tools for that.

Best regards,

Thiemo and Alexis

Institute for Artificial Intelligence
University of Bremen

New Package: libfreenect based Kinect driver

From Piyush via ROS Users

Hey folks,

After some initial discussion on the ROS mailing list [1], a
libfreenect (OpenKinect) based Kinect driver for ROS has been released
for Fuerte (freenect_stack) [2]. A system install for the stack is now
available. The stack is designed to have the same API as the OpenNI
one, and there is an easy migration guide [3].

The stack has the following known limitations:
1) It only supports the Kinect [4]
2) It does not support USB 3.0 [5]. In contrast, OpenNI with a bit of
work can be made to work with USB 3.0 [6][7].

I'll continue to maintain the stack. My first priority will be to
include USB 3.0 compatibility, which is something I will work on as
time permits. Almost all high-end laptops these days only have USB 3.0
ports.

If you are facing problems with the stack, please report them on the
corresponding bug report page [8].

[1] http://comments.gmane.org/gmane.science.robotics.ros.user/16856
[2] http://www.ros.org/wiki/freenect_stack
[3] http://www.ros.org/wiki/freenect_camera?distro=fuerte#Migration_guide
[4] http://www.ros.org/wiki/freenect_camera?distro=fuerte#Other_OpenNI_devices
[5] https://github.com/piyushk/freenect_stack/issues/5
[6] http://answers.ros.org/question/9179/kinect-and-usb-30/
[7] http://answers.ros.org/question/33622/openni_launch-not-working-in-fuerte-ubuntu-precise-1204/
[8] https://github.com/piyushk/freenect_stack/issues

Thanks,
Piyush

RoboEarth ROS stack released

Announcement to ros-users by the RoboEarth team

Dear All,

We are happy to announce the RoboEarth ROS stack.

This stack currently allows you to create 3D object models and upload them to RoboEarth. It also provides packages to download any model stored in RoboEarth and detect the described object using a Kinect or an RGB-webcam.

The main packages are:

  • re_object_recorder: Allows you to create your own 3D object model using Microsoft's Kinect sensor. By recording and merging point clouds gathered from different angles around the object, a detailed model is created, which may be shared with the world by uploading it to RoboEarth (a rough sketch of this merge step follows the list).
  • re_kinect_object_detector: Allows you to detect objects you download from RoboEarth using a Kinect.
  • re_vision: Allows you to detect objects you download from RoboEarth using a common RGB camera (i.e., no Kinect is required for detection).
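
(The merging described above amounts to expressing each recorded view in a common object frame and accumulating the points. Here is a toy numpy sketch of that idea; it is not RoboEarth's code, and the per-view poses are assumed to come from elsewhere, e.g. a registration step.)

# Toy sketch of the merge step: given each view's pose in a common object
# frame, transform every cloud into that frame and concatenate the points.
import numpy as np

def merge_views(clouds, poses):
    """clouds: list of (N_i, 3) point arrays; poses: list of 4x4 view-to-object
    transforms (assumed to be provided by a registration step)."""
    merged = []
    for pts, T in zip(clouds, poses):
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homog @ T.T)[:, :3])  # move this view into the object frame
    return np.vstack(merged)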

For more information, please see http://www.ros.org/wiki/roboearth or our website http://www.roboearth.org

Additionally, a demonstration video is available at: http://www.youtube.com/watch?v=5uMCa-dgtFE

We are looking forward to your feedback.

The RoboEarth Team info@roboearth.org

New Release of RGBDSlam

Announcement from Felix Endres from the University of Freiburg to ros-users

Dear ROS Users,

we are happy to announce a new release of our entry to the ROS-3D contest.

There have been many changes that we would like to share with the community:

  • Improvements w.r.t. accuracy and robustness of registration
  • Performance improvements w.r.t. computation time
  • A more convenient user interface with internal 3D visualization
  • Many convenience features, e.g., saving to pcd/ply files, node deletion, etc.

It is available for download.

Quick Installation (see README in svn repo for more detail) for Ubuntu:

$ sudo apt-get install ros-diamondback-desktop-full ros-diamondback-perception-pcl-addons ros-diamondback-openni-kinect meshlab
$ svn co http://alufr-ros-pkg.googlecode.com/svn/trunk/freiburg_tools/hogman_minimal
$ svn co https://svn.openslam.org/data/svn/rgbdslam/trunk rgbdslam
$ roscd rgbdslam && rosmake --rosdep-install rgbdslam

Best regards,
Felix


Creating 3D models of environments with a Kinect

The CCNY Robotics Lab was the first to bring us Kinect drivers for ROS, so it's not surprising that they have been working on some awesome Kinect demos.

In the above video, they show some of the latest results of their 6D pose estimation. Simply by moving the Kinect around an office, they are able to register multiple scans together and create a 3D model of the scene. Their code works with no extra sensors: they simply move the Kinect around freehand.

The work was done by Ivan Dryanovski, Bill Morris, Ravi Kaushik, and Dr. Jizhong Xiao. They are using custom RGB-D feature descriptors for the scan registration and use OpenCV, PCL, and ROS under the hood. They are working on releasing and documenting their code. In the meantime, you can check out the rest of the cool software available in ccny-ros-pkg.

SLAM with Kinect on a Quadrotor

MIT's Robust Robotics Group, University of Washington, and Intel Labs Seattle teamed up to produce this demonstration of 3D map construction with a Kinect on a Quadrotor. Their demonstration combines onboard visual odometry for local control and offboard SLAM for map reconstruction. The visual odometry enables the quadrotor to navigate indoors where GPS is not available. SLAM is implemented using RGBD-SLAM.

More information

3D visual SLAM with mobile robots

A group of enterprising University of Waterloo undergrads has combined mobile robotics and 3D visual SLAM to produce 3D color maps. They mounted a Kinect 3D sensor on a Clearpath Husky A200 and used it to map cluttered industrial and office environments. The video shows off the impressive progress and capabilities of their "iC2020" module.

The iC2020 module was created by Sean Anderson, Kirk Mactavish, Daryl Tiong, and Aditya Sharma as part of their fourth-year design project at the University of Waterloo. They formed their group with the goal of using PrimeSense technology to create globally consistent, dense 3D color maps.

Under the hood they use ROS, OpenCV, GPUSURF, and TORO to tackle the various challenges of motion estimation, mapping, and loop closure in noisy environments. Their software provides real-time views of the 3D environment as it is created. ROS is supported out of the box on the Clearpath Husky, and Sean Anderson noted that "ROS was crucial to the project's success" due to its ease of use and flexibility.

Their source code is available under a Creative Commons-NC-SA license at the ic2020 project on Google Code.

Implementation details (a rough sketch follows the list):

  • Optical Flow using Shi-Tomasi Corners
  • Visual Odometry using Shi-Tomasi and GPU SURF
    • Features undergo RANSAC to find inliers (in green)
    • Least Squares is used across all inliers to solve for rotation and translation
  • Loop closure detection using a dynamic feature library
  • Global Network Optimization for loop closure
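
For readers curious what such a pipeline looks like, below is a rough, self-contained sketch of frame-to-frame motion estimation in that style: Shi-Tomasi corners, pyramidal LK optical flow, back-projection with the depth image, and a RANSAC plus least-squares (SVD) fit of the rigid transform. This is not the iC2020 code, and the camera intrinsics are placeholder values.

# Rough sketch of RGB-D frame-to-frame motion estimation (not the iC2020 code).
import cv2
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # placeholder Kinect-like intrinsics

def backproject(uv, depth):
    """Pixel coordinates + depth image (meters) -> 3D points in the camera frame."""
    u, v = uv[:, 0], uv[:, 1]
    z = depth[v.astype(int), u.astype(int)]
    return np.column_stack(((u - CX) * z / FX, (v - CY) * z / FY, z))

def rigid_fit(P, Q):
    """Least-squares rotation/translation mapping P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # repair an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def estimate_motion(gray0, depth0, gray1, depth1, iters=200, thresh=0.03):
    pts0 = cv2.goodFeaturesToTrack(gray0, maxCorners=500,
                                   qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(gray0, gray1, pts0, None)
    ok = status.ravel() == 1
    P = backproject(pts0.reshape(-1, 2)[ok], depth0)
    Q = backproject(pts1.reshape(-1, 2)[ok], depth1)
    valid = (P[:, 2] > 0) & (Q[:, 2] > 0)
    P, Q = P[valid], Q[valid]
    best = None
    for _ in range(iters):            # simple 3-point RANSAC over rigid fits
        idx = np.random.choice(len(P), 3, replace=False)
        R, t = rigid_fit(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_fit(P[best], Q[best])  # final least-squares fit on all inliers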

More information: iC 20/20


If you needed motivation or were waiting for the right time to upgrade to the new Diamondback release of ROS, this might be what you were looking for.

Announcement from Suat Gedikli of Willow Garage to ros-users

Hi everyone,

As part of our rewrite of the Kinect/PrimeSense drivers for ROS,
we're happy to announce that there are now debian packages
available for i386/amd64 on Ubuntu Lucid and Maverick. This
means that you can now:

sudo apt-get install ros-diamondback-openni-kinect

We hope this will simplify setting up your computer with the
Kinect. The documentation for the new openni_kinect stack can be
found here:

http://www.ros.org/wiki/openni_kinect

The new stack is compatible with the old 'ni' stack. We have
renamed the stack to provide clarity for new users and also to
maintain backwards compatibility with existing installations.

As an alternative, you can download, compile and install
these libraries from the sources, which are available here:

https://kforge.ros.org/openni/openni_ros

Other sample applications in the old "ni" stack are still
available there, but will be moved to other stacks in the near
future. The "ni" stack is deprecated and we encourage developers
wishing to use the latest updates to switch to the openni_kinect
stack.

FYI: developers on non-ROS platforms can find our scripts for
generating debian packages for OpenNI here:

https://kforge.ros.org/openni/drivers

Cheers,
Suat Gedikli
Tully Foote

3D Head Tracking tutorial


Patrick Goebel of Pi Robot has put together an excellent tutorial on doing 3D head tracking with ROS. In Part 1 he covers configuring tf, setting up the URDF model, and configuring the Dynamixel AX-12+ servos that control the pan and tilt of a Kinect.
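
The geometric core of head pointing is simple: transform the target point into the frame at the base of the pan/tilt unit and read off the two angles. The rospy/tf sketch below illustrates that idea only; it is not Patrick's tutorial code, and the frame names are placeholders.

# Sketch: compute pan/tilt angles toward a target point using tf
# (placeholder frame names; sign conventions depend on your URDF).
import math
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node("point_head_sketch")
listener = tf.TransformListener()

target = PointStamped()
target.header.frame_id = "base_link"          # frame the target is given in
target.header.stamp = rospy.Time(0)           # use the latest available transform
target.point.x, target.point.y, target.point.z = 1.0, 0.5, 1.2

listener.waitForTransform("head_pan_link", target.header.frame_id,
                          rospy.Time(0), rospy.Duration(4.0))
p = listener.transformPoint("head_pan_link", target)

pan = math.atan2(p.point.y, p.point.x)
tilt = math.atan2(p.point.z, math.hypot(p.point.x, p.point.y))
rospy.loginfo("pan %.2f rad, tilt %.2f rad", pan, tilt)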

Besides more accurate depth estimation, one benefit of using a 3D sensor for head tracking is that it helps reject false positives: the robot can tell the difference between a person's head and a picture of a person's head.

You may also be interested in his previous tutorial on OpenCV Head Tracking.


Announcement from Patrick Goebel of Pi Robot to ros-users

Hello ROS users,

I have put together a little tutorial on using tf to point your robot's
head toward different locations relative to different frames of
reference. Eventually I'll get the tutorial onto the ROS wiki, but for
now it lives at:

http://www.pirobot.org/blog/0018/

The tutorial uses the ax12_controller_core package from the ua-ros-pkg
repository. Many thanks to Anton Rebguns for patiently helping me get
the launch files set up.

Please post any questions or bug reports to http://answers.ros.org or
email me directly.

Patrick Goebel
The Pi Robot Project
http://www.pirobot.org

Hands-free vacuuming by OTL

OTL has been a frequent contributor of great Roomba hacks, and this one is no exception. This time he's used a Kinect and a Roomba bluetooth connector to take back control of the vacuum. You can find out more in his blog post (Japanese). His blog is a great Japanese-language resource for getting into ROS.


Taylor Veltrop has made the first entry to our ROS 3D Contest. He uses the Kinect and NITE to put a Kondo-style humanoid through pushups, waves, and other arm-control gestures. Great work! We look forward to seeing more entries.

Hi everyone!

Please take a look at my entry in the Kinect/RGB-D contest! I'm really happy with how it's turned out so far.

It's a small humanoid hobby robot by Kondo with a Roboard running ROS. The arms are controlled master/slave style over the network by a Kinect.

Entry: Humanoid Teleoperation

Taylor Veltrop

You can watch an interview with Taylor about this project over at Robot Dreams.

In the works: ScaViSLAM

For Kinect/OpenNI users and VSLAM researchers, we're working on integrating Hauke Strasdat's ScaViSLAM framework into ROS. ScaViSLAM is a general and scalable framework for visual SLAM and should enable exciting applications like constructing 3D models of environments, creating 3D models of objects, augmented reality, and autonomous navigation.

We hope to release the ScaViSLAM library in Spring of 2011.

OpenNI Updates

Development on our OpenNI/ROS integration for the Kinect and PrimeSense Developers Kit 5.0 device continues at a fast pace. For those of you participating in the contest or otherwise hacking away, here's a summary of what's new. As always, contributions/patches are welcome.

Driver Updates: Bayer Images, New point cloud and resolution options via dynamic_reconfigure

Suat Gedikli, Patrick Mihelich, and Kurt Konolige have been working on the low-level drivers to expose more of the Kinect features. The low-level driver now has access to the Bayer pattern at 1280x1024 and we're working on "Fast" and "Best" (edge-aware) algorithms for de-bayering.

We've also integrated support for high-resolution images from avin's fork, and we've added options to downsample the image to lower resolutions (QVGA, QQVGA) for performance gains.

You can now select these resolutions, as well as different options for the point cloud that is generated (e.g. colored, unregistered) using dynamic_reconfigure.
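
These settings can also be changed at runtime from a script using the dynamic_reconfigure Python client. The node and parameter names below are illustrative only; list the real ones first with "rosrun dynamic_reconfigure dynparam list" and "dynparam get <node>".

# Sketch: switch driver settings at runtime (assumed node/parameter names).
import rospy
import dynamic_reconfigure.client

rospy.init_node("openni_reconfigure_sketch")
client = dynamic_reconfigure.client.Client("/openni_node", timeout=10.0)
client.update_configuration({
    "image_mode": 2,          # e.g. VGA vs. QVGA enum value (assumed name)
    "point_cloud_type": 0,    # e.g. unregistered XYZ (assumed name)
})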

Here are some basic (unscientific) performance stats on a 1.6 GHz i7 laptop:

  • point_cloud_type: XYZ+RGB, resolution: VGA (640x480), RGB image_resolution: SXGA (1280x1024)
    • XnSensor: 25%, openni_node: 60%
  • point_cloud_type: XYZ+RGB, resolution: VGA (640x480), RGB image_resolution: VGA (640x480)
    • XnSensor: 25%, openni_node: 60%
  • point_cloud_type: XYZ_registered, resolution: VGA (640x480), RGB image_resolution: VGA (640x480)
    • XnSensor: 20%, openni_node: 30%
  • point_cloud_type: XYZ_unregistered, resolution: VGA (640x480), RGB image_resolution: VGA (640x480)
    • XnSensor: 8%, openni_node: 30%
  • point_cloud_type: XYZ_unregistered, resolution: QVGA (320x240)
    • XnSensor: 8%, openni_node: 10%
  • point_cloud_type: XYZ_unregistered, resolution: QQVGA (160x120)
    • XnSensor: 8%, openni_node: 5%
  • No client connected (all cases)
    • XnSensor: 0%, openni_node: 0%

NITE Updates: OpenNI Tracker, 32-bit support in ROS

Thanks to Kei Okada and the JSK Lab at the University of Tokyo, the Makefile for the NITE ROS package properly detects your architecture (32-bit vs. 64-bit) and downloads the correct binary.

Tim Field put together a ROS/NITE sample called openni_tracker for those of you wishing to:

  1. Figure out how to compile OpenNI/NITE code in ROS
  2. Export the skeleton tracking as TF coordinate frames.

The sample is a work in progress, but hopefully it will give you all a head start.
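
Once openni_tracker is broadcasting, the skeleton can be consumed like any other tf data. The sketch below is illustrative only: the frame names are assumptions (the tracker publishes per-user frames), so check what is actually broadcast on your system, e.g. with "rosrun tf tf_monitor".

# Sketch: read a tracked torso pose from tf (assumed frame names).
import rospy
import tf

rospy.init_node("skeleton_listener_sketch")
listener = tf.TransformListener()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    try:
        pos, quat = listener.lookupTransform("openni_depth_frame", "torso_1",
                                             rospy.Time(0))
        rospy.loginfo("torso at x=%.2f y=%.2f z=%.2f", *pos)
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        pass    # no user tracked yet
    rate.sleep()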

Point Cloud → Laser Scan

Tully Foote and Melonee Wise have written a pointcloud_to_laserscan package that converts the 3D data into a 2D 'laser scan'. This is useful for using the Kinect with algorithms that require laser scan data, like laser-based SLAM.
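
Conceptually, the conversion keeps points inside a height band, bins them by bearing angle, and reports the nearest range in each bin. The numpy sketch below shows that idea; it is not the pointcloud_to_laserscan implementation.

# Sketch of a point-cloud-to-laser-scan conversion (illustrative only).
import numpy as np

def cloud_to_scan(points, z_min=-0.1, z_max=0.1,
                  angle_min=-np.pi / 2, angle_max=np.pi / 2, n_bins=180):
    """points: (N, 3) array of x, y, z in a sensor frame with x forward."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (z > z_min) & (z < z_max)            # slice out a horizontal band
    angles = np.arctan2(y[keep], x[keep])
    ranges = np.hypot(x[keep], y[keep])
    scan = np.full(n_bins, np.inf)
    bins = ((angles - angle_min) / (angle_max - angle_min) * n_bins).astype(int)
    ok = (bins >= 0) & (bins < n_bins)
    for b, r in zip(bins[ok], ranges[ok]):
        scan[b] = min(scan[b], r)               # nearest obstacle per angular bin
    return scan                                 # would fill LaserScan.ranges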

OpenNI PCL

Radu Rusu is working on an openni_pcl package that will allow you to better use the Point Cloud Library with the OpenNI driver. This package currently contains a point cloud viewer as well as nodelet-based launch files for creating a voxel grid. More launch files are on the way.
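
For reference, a voxel grid filter quantizes points into fixed-size 3D cells and keeps one representative point (typically the centroid) per occupied cell. The small numpy sketch below shows the idea; the actual nodelets use PCL's VoxelGrid implementation.

# Sketch of voxel-grid downsampling (illustrative; PCL does the real work).
import numpy as np

def voxel_downsample(points, leaf_size=0.02):
    """points: (N, 3) array in meters; returns one centroid per occupied voxel."""
    keys = np.floor(points / leaf_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out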

New tf frames

There are new tf frames that you can use, which simplifies interaction in rviz (for those not used to Z-forward). The new frames also bring the driver in conformance with REP 103.

These frames are: /openni_camera, /openni_rgb_frame, /openni_rgb_optical_frame (Z forward), /openni_depth_frame, and /openni_depth_optical_frame (Z forward). For more info, see Tully's ros-kinect post.

Roadmap

We're getting close to the point where we will be breaking the ni stack up into smaller pieces. This will keep the main driver lightweight, while still enabling libraries to be integrated on top. We will also be folding in more of PCL's capabilities soon.

Kinect-based Person Follower

Garratt Gallagher from CSAIL/MIT is at it again. Above, you can see his work on using the new OpenNI-based ROS drivers to get an iRobot Create to follow a person around. This code is based on the skeleton tracker that comes with the NITE library.

For those of you figuring out how to get the NITE tracking data into ROS, take a look at Garratt's nifun package.
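
The follower loop itself reduces to reading the tracked torso frame from tf and sending proportional velocity commands to the base. The sketch below illustrates that loop only; it is not Garratt's code, and the frame names, topic name, and gains are assumptions.

# Sketch of a tf-based person follower (assumed frame/topic names and gains).
import rospy
import tf
from geometry_msgs.msg import Twist

rospy.init_node("person_follower_sketch")
listener = tf.TransformListener()
cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)
FOLLOW_DIST = 1.0                 # stay about one meter away from the person

while not rospy.is_shutdown():
    cmd = Twist()
    try:
        pos, _ = listener.lookupTransform("base_link", "torso_1", rospy.Time(0))
        cmd.linear.x = 0.5 * (pos[0] - FOLLOW_DIST)   # drive toward/away
        cmd.angular.z = 1.0 * pos[1]                  # turn to keep them centered
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        pass                      # nobody tracked: publish zero velocity
    cmd_pub.publish(cmd)
    rate.sleep()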

This morning OpenNI was launched, complete with open source drivers for the PrimeSense sensor. We are happy to announce that we have completed our first integration with OpenNI, which will enable users to get point cloud data from a PrimeSense device into ROS. We have also enabled support for the Kinect sensor with OpenNI.

This new code is available in the ni stack. We have low-level, preliminary integration of the NITE skeleton and hand-point gesture library. In the coming days we hope to have the data accessible in ROS as well.

For more information, please see ros.org/wiki/ni.


The RGB-D project, a joint research effort between Intel Labs Seattle and the University of Washington Department of Computer Science & Engineering, has lots of demo videos on their site showing the various ways in which they have been using the PrimeSense RGB-D sensors in their work. These demos include 3D modeling of indoor environments, object recognition, object modeling, and gesture-based interactions.

In the video above, the "Gambit" chess-playing robot uses the RGB-D sensor to monitor a physical chessboard and play against a human opponent. And yes, that is the ROS rviz visualizer in the background.

More Videos/RGB-D Project Page

"Minority Report" interface using Kinect, ROS, and PCL

Garratt Gallagher from CSAIL/MIT has followed up his Kinect piano and hand detection hacks with a full "Minority Report" interface. The demo builds on the pcl library to do hand detection. You'll find Garratt's open-source libraries for building your own interface in mit-ros-pkg.

Update: MIT News release with more details

PrimeSense™ is launching the OpenNI™ organization, an open effort to help foster "Natural Interaction"™ applications. As part of this effort, PrimeSense is releasing open-source drivers for the RGB-D sensor that powers the Kinect™ and other devices such as PrimeSense's Development Kit 5.0 (PSDK 5.0), and is making the hardware available to the OpenNI developer community! This will unlock full support for their sensor and also provide a commercially supported implementation. They are also releasing an open-source OpenNI API, which provides a common middleware for applications to access RGB-D sensors. Finally, they are releasing Windows and Linux binaries for the NITE skeleton-tracking library, which will enable developers to use OpenNI to create gesture and other natural-interaction applications. We at Willow Garage have been working with PrimeSense to help launch the open-source drivers and are happy to join PrimeSense in leading the OpenNI organization.

PrimeSense's RGB-D sensor is the start of a bright future of mass-market 3D sensors for robotics and other applications. The OpenNI organization will foster and accelerate the use of 3D perception for human-computer/robot interaction, as well as help future sensors, libraries, and applications remain compatible as these technologies rapidly evolve.

For the past several weeks, we've been working with members of the libfreenect/OpenKinect community to provide open-source drivers, and we have already begun work to quickly integrate PrimeSense's contributions with these efforts. We will be using the full sensor API to provide better data for computer vision libraries, such as access to the factory calibration and image registration. We are also working on wrapping the NITE™ skeleton and hand-point tracking libraries into ROS. Having access to skeleton tracking will bring about "Minority Report" interfaces even faster. The common OpenNI APIs will also help the open-source community easily exchange libraries and applications that build on top. We've already seen many great RGB-D hacks -- we can't wait to see what will happen with the full power of the sensor and community unleashed.

This release was made possible by the many efforts of the open-source community. PrimeSense was originally planning on releasing these open-source drivers later, but the huge open-source Kinect community convinced them to accelerate their efforts and release now. They will be doing a more "formal" release in early 2011, but this initial access should give the developer community many new capabilities to play with over the holidays. As this is an early "alpha" release, we are still integrating the full capabilities and the ROS documentation is still being prepared. Stay tuned for some follow-up posts on how to start using these drivers and NITE with ROS.

PrimeSense's PSDK 5.0 is available separately and has several advantages for robotics: it is powered solely by USB, and the sensor package is smaller and lighter than the Kinect. This simplifies integration and will be important for use in smaller robots like quadrotors. PrimeSense is making a limited number of PrimeSense developer kits available for purchase. Please visit here to sign up to purchase the PSDK 5.0.

You can visit OpenNI.org to find out more about the OpenNI organization and get binary builds of these releases. Developers interested in working with the source code can check out the repositories on GitHub and join the discussion groups at groups.google.com/group/openni-dev. For more information about OpenNI, please visit OpenNI.org. To follow the efforts of the ROS community and Kinect, please join the ros-kinect mailing list.

Technical information on Kinect calibration


Kurt Konolige and Patrick Mihelich have prepared a technical overview of the Kinect calibration provided in the kinect_calibration package for ROS. For those of you wishing to understand the technology behind the PrimeSense sensor, this provides a detailed overview of how depth is calculated -- and how we go about providing the calibration necessary for perception algorithms.
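
As general background (the device-specific model is what the write-up itself covers), depth from a structured-light sensor such as the Kinect follows the standard triangulation relation between baseline b, focal length f, and disparity d, and the 3D point is then recovered from the pixel coordinates and intrinsics:

\[
  z = \frac{f\,b}{d}, \qquad
  x = \frac{(u - c_x)\,z}{f}, \qquad
  y = \frac{(v - c_y)\,z}{f}
\]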
