PointClouds.org: A new home for Point Cloud Library (PCL)


Announcement from Radu Rusu/Willow Garage

The Point Cloud Library (PCL) moved today to its new home at PointClouds.org. Now that quality 3D point cloud sensors like the Kinect are cheaply available, the need for a stable 3D point cloud-processing library is greater than ever before. This new site provides a home for the exploding PCL developer community that is creating novel applications with these sensors.

PCL contains numerous state-of-the-art algorithms for 3D point cloud processing, including filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. These algorithms can be used, for example, to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world based on their geometric appearance, and create surfaces from point clouds and visualize them.
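As an illustration of the outlier-filtering step mentioned above, here is a minimal sketch using PCL's `StatisticalOutlierRemoval` filter. The file names are placeholders, and the parameter values are illustrative rather than recommended defaults:

```cpp
// Sketch: removing noisy outliers from a point cloud with PCL.
// "input.pcd" and "filtered.pcd" are placeholder file names.
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/statistical_outlier_removal.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);

  pcl::io::loadPCDFile<pcl::PointXYZ>("input.pcd", *cloud);

  // Discard points whose mean distance to their 50 nearest neighbors
  // deviates from the global mean by more than one standard deviation.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(cloud);
  sor.setMeanK(50);
  sor.setStddevMulThresh(1.0);
  sor.filter(*filtered);

  pcl::io::savePCDFile("filtered.pcd", *filtered);
  return 0;
}
```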

First Anniversary: a brief history of PCL

This new site also celebrates the one-year anniversary of PCL. Official development of PCL started in March 2010 at Willow Garage. Our goal was to create a library that could support the type of 3D point cloud algorithms that mobile manipulation and personal robotics need, and to combine years of experience in the field into a coherent framework. PCL's grandfather, Point Cloud Mapping, was developed just a few months earlier, and it served as an important building block in Willow Garage's Milestone 2. Based on these experiences, PCL was launched to bring world-class research in 3D perception together into a single software library. PCL would enable developers to harness the potential of the quickly growing 3D sensor market for robotics and other industries.

For this occasion, we put together a video that presents the development of PCL over time.

Towards 1.0: PCL and Kinect

The launch of the Kinect sensor in November 2010 turned many eyes on PCL, and its user community quickly multiplied. We turned our focus on stabilizing and improving the usability of PCL so that users would be able to develop applications on top. We are now proud to announce that the upcoming release of PCL features a complete Kinect (OpenNI) camera grabber, which allows users to get data directly in PCL and operate on it. PCL has already been used by many of the entries in the ROS 3D contest, showing the potential of Kinect and ROS. Please check our website for tutorials on how to visualize and integrate Kinect data directly in your application.
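The grabber described above follows a callback pattern: you register a function that receives each frame as a PCL point cloud. A minimal sketch, assuming a Kinect is connected (the callback body and run loop are illustrative):

```cpp
// Sketch: receiving Kinect frames through pcl::OpenNIGrabber.
#include <iostream>
#include <boost/function.hpp>
#include <pcl/point_types.h>
#include <pcl/io/openni_grabber.h>

// Called once per frame captured from the sensor.
void cloud_cb(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& cloud)
{
  std::cout << "Received a frame with " << cloud->size() << " points\n";
}

int main()
{
  pcl::OpenNIGrabber grabber;

  boost::function<void(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr&)> f =
      &cloud_cb;
  grabber.registerCallback(f);

  grabber.start();
  // ... process frames until done ...
  grabber.stop();
  return 0;
}
```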

The PCL development team is currently working hard toward a 1.0 release. PCL 1.0 will focus on modularity and enable deployment of PCL on different computational devices.

A Growing Community

We are proud to be part of an extremely active community. Our development team spans three continents and five countries, and it includes prestigious engineers and scientists from institutions such as: AIST, University of California Berkeley, University of Bonn, University of British Columbia, ETH Zurich, University of Freiburg, Intel Research Seattle, LAAS/CNRS, MIT, NVidia, University of Osnabrück, Stanford University, University of Tokyo, TUM, Vienna University of Technology, Willow Garage, and Washington University in St. Louis.


PCL is proudly supported by Willow Garage, NVidia, and Google Summer of Code 2011. For more information please check http://www.pointclouds.org/about.html.


PCL wouldn't have become what it is today without the help of many people. Thank you to our tremendous community, especially our contributors and developers who have worked so hard to make PCL more stable, more user friendly, and better documented. We hope that PCL will help you solve more 3D perception problems, and we look forward to your contributions!


About this Entry

This page contains a single entry by kwc published on March 28, 2011 11:07 AM.
