Visual SLAM for ROS


crossposted from

Helen Oleynikova, a student at Olin College of Engineering, spent her summer internship at Willow Garage working on improving visual SLAM libraries and integrating them with ROS. Visual SLAM is a useful building block in robotics with several applications, such as localizing a robot and creating 3D reconstructions of an environment.

Visual SLAM uses camera images to map out the position of a robot in a new environment. It works by tracking image features between camera frames, and determining the robot's pose and the position of those features in the world based on their relative movement. This tracking provides an additional odometry source, which is useful for determining the location of a robot. As camera sensors are generally cheaper than laser sensors, this can be a useful alternative or complement to laser-based localization methods.
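The core idea above, inferring camera motion from how tracked features shift between frames, can be illustrated with a deliberately simplified sketch. This is not the vslam stack's actual algorithm or API: a real system matches feature descriptors, estimates a full 6-DoF pose (for example via the essential matrix), and refines everything with bundle adjustment. The toy below only recovers a 2D translation from average feature displacement, to show why feature motion acts as an odometry cue.

```python
# Toy illustration only (NOT the vslam stack's algorithm): estimate a
# camera's 2D translation between two frames from the displacement of
# tracked image features. Real visual SLAM recovers full 6-DoF pose and
# also estimates the 3D positions of the features themselves.

def estimate_translation(prev_pts, curr_pts):
    """Return an apparent camera translation from tracked feature pairs.

    prev_pts, curr_pts: lists of (x, y) pixel coordinates for the same
    features in the previous and current frame.
    """
    n = len(prev_pts)
    # Average displacement of the features across the frame pair.
    dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    # Static features appear to move opposite to the camera's motion.
    return (-dx, -dy)

# Features tracked from frame t to frame t+1 (hypothetical pixel data):
# everything shifted 5 px left and 2 px down, so the camera apparently
# moved right and up.
prev_pts = [(100, 120), (200, 80), (150, 200)]
curr_pts = [(95, 122), (195, 82), (145, 202)]
print(estimate_translation(prev_pts, curr_pts))  # -> (5.0, -2.0)
```

Chaining such frame-to-frame estimates gives the additional odometry source described above; fusing it with wheel odometry or laser-based localization is what makes it useful on a robot.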

In addition to improving the VSLAM libraries, Helen put together documentation and tutorials to help you integrate Visual SLAM with your robot. VSLAM can be used with ROS C Turtle, though installing it takes extra steps, as it depends on libraries that are being developed for ROS Diamondback.

To find out more, check out the vslam stack. For detailed technical information, you can also check out Helen's presentation slides below.


About this Entry

This page contains a single entry by kwc published on September 15, 2010 12:04 AM.
