Recently in packages Category

Actin-ROS Interface

From Neil Tardella

Actin is a powerful commercial control and simulation framework used in several industrial and government robotic systems. Energid, the developer of Actin, is now providing a ROS Kinetic stack and a ROS plugin base class for Actin that supports Windows, Mac OS X, and Linux. Actin now also includes URDF reader support in Linux builds.

The open source ActinROS code is available on Github at the following link:

https://github.com/Energid/ActinROS

The repository includes plugins and example applications for using Actin with ROS. A lightweight version of Actin ships with Robai Cyton robots.

ROS binary logger package

From Enrico Villagrossi

We would like to announce the release of the new ROS binary logger package. The package is designed to be an alternative to rosbag when:

  1. multiple, long message acquisitions are required (the binary files are smaller)
  2. only offline data analysis is required and no replay of the experiment in ROS is necessary (e.g. data analysis with MATLAB)

Using binary files reduces the size of the log files and speeds up their post-processing (e.g. MATLAB spends ~0.1 s to unpack a 300 MB binary file). The package can record some common ROS messages such as sensor_msgs/Imu, sensor_msgs/JointState, geometry_msgs/WrenchStamped, etc. New message types can be easily added and users are encouraged to contribute. Two MATLAB scripts are also provided to unpack the binary files.
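
To illustrate the idea (this is only a sketch of the general approach, not the package's actual on-disk format, which is documented in the repository), a subscriber can append fixed-size binary records that are trivial to read back with MATLAB's fread:

    #!/usr/bin/env python
    # Illustrative sketch only: logs JointState positions as flat binary records.
    # The real binary_logger package defines its own format; see its repository.
    import struct
    import rospy
    from sensor_msgs.msg import JointState

    out = open('joints.bin', 'wb')

    def callback(msg):
        t = msg.header.stamp.to_sec()
        # one record: timestamp + first six joint positions, little-endian doubles
        # (assumes the robot reports at least six joints)
        out.write(struct.pack('<7d', t, *msg.position[:6]))

    rospy.init_node('binary_logger_sketch')
    rospy.Subscriber('joint_states', JointState, callback)
    rospy.spin()
    out.close()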

You can find the code here: https://github.com/CNR-ITIA-IRAS/binary_logger. More information and a short description can be found in the repository.

Contacts: Manuel Beschi manuel.beschi@itia.cnr.it - Enrico Villagrossi enrico.villagrossi@itia.cnr.it

Announcing the release v0.6 of RAPP Platform and RAPP API

From Manos Tsardoulias

We are happy to announce v0.6 of the RAPP Platform and RAPP API, aimed at providing an online platform that delivers ready-to-use generic cloud services to robots!

The changes in comparison to v0.5.5 follow:

  • RAPP Platform Web services now support authentication via a tokens mechanism

  • Several new functionalities were introduced in the form of ROS nodes along with the respective API web calls. These include object recognition via a Caffe wrapper (http://caffe.berkeleyvision.org/), e-mail management, a geolocator, hazard detection in a household environment (detects whether lights were left on or doors left open), human detection, a news explorer, path planning and a weather reporter.

  • Web services: introduced a framework developed on top of hop.js for easily implementing web services (documentation)

  • The Python Platform API was refactored, supporting high-level and advanced API implementations, as well as static request and response objects.

  • RAPP Platform Wiki has been updated with the current description of all nodes, including full tutorials on how to create a new functionality, a new web service or even robotic applications.

  • RAPP Platform scripts (installation and deployment) were moved to a separate repository

You can download a ready-to-launch VM containing RAPP Platform v0.6 from here. Furthermore, RAPP Platform v0.6 is already publicly deployed on the Aristotle University of Thessaloniki premises. You can find more information on how to invoke its cloud services here.

RAPP is a 3-year research project (2013-2016) funded by the European Commission through its FP7 programme, which provides an open source software platform to support the creation and delivery of robotic applications. Its technical objectives include the development of an infrastructure for developers of robotic applications, so they can easily build and include machine learning and personalization techniques to their applications, the creation of a repository from which robots can download Robotic Applications (RApps) and upload useful monitoring information, as well as developing a methodology for knowledge representation and reasoning in robotics and automation. More information on RAPP can be found at http://rapp-project.eu/.

New Package PlotJuggler

From Davide Faconti

I would like to announce PlotJuggler, a Qt-based application that lets the user load, search and plot data. Many ROS users use MATLAB or rqt_plot for this purpose, but these solutions can be frustrating when the data to be analyzed is very large.

PlotJuggler is meant to be a better alternative to rqt_plot and rqt_bag, providing a more user-friendly interface.

Features:

  • Multiplot: add multiple curves to a plot. Arrange plots in rows, columns, tabs and/or separate windows.

  • Zoom: easily zoom a plot. You can lock the X axis of all of the plots.

  • Save/Load layouts: once you have organized your layout, you can save it to a file to be reused later.

  • Complete Undo/Redo: CTRL-Z does what you would expect it to do.

  • DataLoad plugins: easily load CSV or rosbags.

  • DataStreaming plugins: subscribe to one or more ROS topics and plot their data live.

  • RosPublisher plugin: re-publish the original ROS messages using the interactive tracker.

You can get a first impression of how PlotJuggler works here

PlotJuggler: a desktop application to plot time series. from Davide Faconti on Vimeo.

PlotJuggler: live streaming of a ROS Topic from Davide Faconti on Vimeo.

PlotJuggler: loading and re-publishing messages from ROS bags from Davide Faconti on Vimeo.

PlotJuggler is still in its "alpha" stage and under heavy development. I would like to get some feedback from the community to understand how this tool needs to evolve.

You can find the code here: https://github.com/facontidavide/PlotJuggler

NOTE: you will also need this package: https://github.com/facontidavide/ros_type_introspection

Introducing Cartographer

From Damon Kohler, Wolfgang Hess, and Holger Rapp, Google Engineering

We are happy to announce the open source release of Cartographer, a real-time SLAM library in 2D and 3D with ROS support.

Cartographer builds globally consistent maps in real time across a broad range of sensor configurations common in academia and industry. The following video is a demonstration of Cartographer's real-time loop closure:


A detailed description of Cartographer's 2D algorithms can be found in our ICRA 2016 paper.

Thanks to ROS integration and support from external contributors, Cartographer is ready to use on several robot platforms with ROS support:

At Google, Cartographer has enabled a range of applications from mapping museums and transit hubs to enabling new visualizations of famous buildings.

We recognize the value of high quality datasets to the research community. That's why, thanks to cooperation with the Deutsches Museum (the largest tech museum in the world), we are also releasing three years of LIDAR and IMU data collected using our 2D and 3D mapping backpack platforms during the development and testing of Cartographer.


Our focus is on advancing and democratizing SLAM as a technology. Currently, Cartographer is heavily focused on LIDAR SLAM. Through continued development and community contributions, we hope to add both support for more sensors and platforms as well as new features, such as lifelong mapping and localizing in a pre-existing map.

Grid Map Library

From Péter Fankhauser via ros-users@:

We'd like to announce our new Grid Map package, developed to manage two-dimensional grid maps with multiple data layers and designed for mobile robotic mapping in rough terrain navigation.

The package is available for ROS Indigo, Jade, and Kinetic and can be installed from the ROS PPA. After multiple development cycles and use in many projects, the library is well tested and stable.

Features:

  • Multi-layered: Developed for universal 2.5-dimensional grid mapping with support for any number of layers.

  • Efficient map re-positioning: Data storage is implemented as a two-dimensional circular buffer. This allows for non-destructive shifting of the map's position (e.g. to follow the robot) without copying data in memory (a conceptual sketch follows this list).

  • Based on Eigen: Grid map data is stored as Eigen data types. Users can apply available Eigen algorithms directly to the map data for versatile and efficient data manipulation.

  • Convenience functions: Several helper methods allow for convenient and memory safe cell data access. For example, iterator functions for rectangular, circular, polygonal regions and lines are implemented.

  • ROS interface: Grid maps can be directly converted to and from ROS message types such as PointCloud2, OccupancyGrid, GridCells, and our custom GridMap message.

  • OpenCV interface: Grid maps can be seamlessly converted from and to OpenCV image types to make use of the tools provided by OpenCV.

  • Visualizations: The grid_map_rviz_plugin renders grid maps as 3d surface plots (height maps) in RViz. Additionally, the grid_map_visualization package helps to visualize grid maps as point clouds, occupancy grids, grid cells etc.
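
To illustrate the circular-buffer re-positioning mentioned above, here is a small conceptual numpy sketch; it is not the library's C++ implementation, just the idea of shifting a map by clearing only the cells that scroll out:

    import numpy as np

    class CircularGridMap:
        """One data layer with circular-buffer shifting (conceptual sketch,
        not the grid_map C++ implementation)."""

        def __init__(self, size):
            self.size = size                         # cells per side
            self.data = np.full((size, size), np.nan)
            self.start = np.array([0, 0])            # buffer index of map cell (0, 0)

        def cell(self, row, col):
            # translate map indices to circular-buffer indices
            return self.data[(self.start[0] + row) % self.size,
                             (self.start[1] + col) % self.size]

        def shift(self, d_row, d_col):
            # Re-position the map by (d_row, d_col) cells. Only the cells that
            # scroll out of the map are cleared; nothing else is copied or moved.
            # Assumes |d_row|, |d_col| < size.
            for axis, c in enumerate((d_row, d_col)):
                if c == 0:
                    continue
                if c > 0:
                    stale = (self.start[axis] + np.arange(c)) % self.size
                else:
                    stale = (self.start[axis] + c + np.arange(-c)) % self.size
                if axis == 0:
                    self.data[stale, :] = np.nan
                else:
                    self.data[:, stale] = np.nan
                self.start[axis] = (self.start[axis] + c) % self.size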

Source code, documentation, and tutorials available at https://github.com/ethz-asl/grid_map

From Weijia Yao via ros-users@

I am a member of the NuBot team, a RoboCup Middle Size League participating team. We have built a simulation system based on ROS and Gazebo to research multi-robot cooperation strategies. Although it mainly focuses on soccer robots, it could be modified for other purposes as well. If you are interested, please check out this repository: single_nubot_gazebo. There is also a simulation competition based on it; check out simatch.

New Package: rosparam_handler package

From Claudio Bandera via ros-users@

I was very frustrated with how I had to define parameters for my nodes in several places: the declaration, the call to getParam, and then everything again in a second place when I wanted a parameter to be configurable through dynamic_reconfigure. Furthermore, you had to make sure the redundant parameters lived in the same namespace, otherwise you would run into serious trouble. This made it quite hard and error-prone to add or refactor parameters later.

To solve this problem, I have created the rosparam_handler package. It is inspired by the cfg files and code generation provided by dynamic_reconfigure, but extends the functionality greatly.

The rosparam_handler lets you:

  • specify all of your parameters in a single file
  • use a generated struct to hold your parameters
  • use a member method for grabbing the parameters from the parameter server
  • use a member method for updating them from dynamic_reconfigure.
  • make your parameters configurable with a single flag.
  • set default, min and max values
  • choose between global and private namespace
  • save a lot of time on specifying your parameters in several places.

If this sounds interesting to you, have a look at the README, Tutorials and the source code at https://github.com/cbandera/rosparam_handler
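
For contrast, this is the kind of hand-written, duplicated parameter handling the package is meant to replace; a minimal rospy sketch with made-up node and parameter names:

    #!/usr/bin/env python
    # Hand-rolled parameter handling that rosparam_handler automates away.
    # Node and parameter names are made up for illustration.
    import rospy

    rospy.init_node('my_node')

    # Every parameter is named here, again in the launch file, and a third
    # time in a dynamic_reconfigure .cfg if it should be configurable.
    rate = rospy.get_param('~rate', 10)
    frame_id = rospy.get_param('~frame_id', 'base_link')
    max_speed = rospy.get_param('~max_speed', 1.0)

    rospy.loginfo('rate=%d frame_id=%s max_speed=%.2f', rate, frame_id, max_speed)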

Please let me know if you have any feedback, suggestions or any trouble using the package.

From Limor Schweitzer and his team at RoboSavvy:


Small step for Virtual Reality (VR), big step for autonomous robots. One of the key issues with autonomous robot applications is indoor localization. The HTC Vive has single-handedly solved this age-old problem.

This $800 system (the price will go down to $200 in a few months once lighthouses and base stations are available without the headset, in addition to minuscule lighthouse sensors) is comparable to a $150,000 IR-marker multi-camera system. The Vive gives you 60 fps and 0.3 mm resolution across any size of internal volume (currently a 5 m cube, but this will be extendable). So unless you are doing indoor 3D drones, you don't need more than 60 Hz, and a camera system will only give ~cm resolution. No other indoor localization system comes anywhere close to the Vive's specs.

Initially the idea was to just use this to calibrate our robot's odometry/localization mechanisms (visual, wheels, LIDAR, IMU). However, there was an unexpected turn of events this past month whereby Valve is opening up the technology for non-VR applications, so it may actually be possible to rely on this for real indoor applications and use the other forms of localization as backup.

We ended up integrating the Vive API for tracking the handheld devices with ROS. This provides ROS robots with the most precise absolute indoor localization reference. Source code is available at:

https://github.com/robosavvy/vive_ros

From Yanzhen Wang via ros-users@

This is an announcement for micros_swarm_framework, developed by Xuefeng Chang in our group (the micROS Team, https://micros.trustie.net). micros_swarm_framework is a ROS-based programming framework for swarm robotics. It is motivated by the rapidly increasing volume of research devoted to multi-robot systems and swarm robotics, and the design of its API is largely inspired by the Buzz programming language (http://the.swarming.buzz/). Its goal is to help ROS users develop applications for robot swarms by providing essential mechanisms, such as abstraction of swarms, swarm management, various communication tools, and a runtime environment, within the standard ROS ecosystem.

Currently, it is fully compatible with ROS Indigo and is provided in the form of a C++ library. Many additional features will be added in the future to make the framework more user-friendly and powerful.

Documentation can be found on ROS Wiki: https://wiki.ros.org/micros_swarm_framework. Source code for the framework and demos in the Stage simulator can be found on GitHub: https://github.com/xuefengchang/micros_swarm_framework.

Hope you enjoy! Comments and suggestions would be highly appreciated.

From Philipp Schillinger via ros-users@

I created a new rqt plugin for launch files which might be of interest for some of you: rqt_launchtree

It lets you navigate through the hierarchy of included launch files, shows entries such as nodes, params, or arguments and has a keyword search throughout the hierarchy. Furthermore, you can directly open any included file for editing.

In contrast to rqt_launch, it is not meant to execute any nodes. Instead, the focus is on the hierarchy and on providing an overview of the system configuration described by a root launch file. You won't have to open launch files anymore just to check what they include, and you will find the file you are looking for much more easily.

It is available on github: https://github.com/pschillinger/rqt_launchtree

You can find more detailed and further information in the wiki. Please let me know if you miss some information. http://wiki.ros.org/rqt_launchtree

New Package: Bag Database

From P. J. Reed

As hinted yesterday, the real project I have to announce is the Bag Database. This is a server that will scan and monitor an arbitrary directory of bag files, index them, and provide a web-based interface that can be used to quickly search through, analyze, and download them. Have you ever wondered, "Do we have any bags that have a TexturedMarker message in them?", or "What did the path this vehicle followed look like on a satellite terrain map?" This will help you answer both of those questions.

It was designed primarily for internal use; our team has a NAS on which we store thousands of bag files, many of which are several GB in size, and searching through them by hand was difficult and time-consuming. With the Bag Database, everybody can still use tools such as Samba or SFTP to put their bags on the NAS, and the Bag DB will automatically analyze them and make them available in its UI. We have about 15 TB of bags, and it takes about half a second to search through all of them for arbitrary message types or topics.

Features:

  • Display any of the information about a bag normally obtained through "rosbag info"
  • Quickly search for bags based on their filename, location, contained message types, or published topics
  • Filter the visible list of bags based on start and end times, latitude/longitude, size, and more
  • Store user-entered metadata such as the vehicle name or description
  • Display the bag's path of GPS coordinates on a MapQuest or Bing map
  • Use Google's reverse-geocoding API to get a string describing a bag's location from its lat/lon coordinates
  • Identify duplicate bag files and tell you about them (although this UI could be better...)

The Bag Database is a Java servlet that only needs a PostgreSQL database to be useful, and it's easiest to deploy it as a Docker container. Source code, documentation, and installation instructions are all available on GitHub: https://github.com/swri-robotics/bag-database

I know it still has a few rough spots I can work out, but I thought now was a good time to go ahead and release it and see how much interest there is.

Feel free to submit any issues or feature requests on GitHub, and let me know if you have any other questions about it.

slack-ros-pkg: Let your robot chat with you !

From Joffrey Kriegel

I recently made a package to enable communication between ROS and Slack. Slack is a multi-platform messaging app for teams.

This package is able to connect to a Slack channel, listen to what you say in it, and publish it on a ROS topic. It is also able to write to the Slack channel via another ROS topic.
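
A minimal sketch of talking to the bridge from another node; the topic names and the std_msgs/String type used here are placeholders rather than the package's documented interface, so check the README for the actual names:

    #!/usr/bin/env python
    # Sketch only: topic names and message type are assumptions, not the
    # package's documented interface.
    import rospy
    from std_msgs.msg import String

    def on_slack_message(msg):
        rospy.loginfo('Heard on Slack: %s', msg.data)

    rospy.init_node('slack_bridge_demo')
    rospy.Subscriber('from_slack', String, on_slack_message)  # what people type in the channel
    to_slack = rospy.Publisher('to_slack', String, queue_size=1)
    rospy.sleep(1.0)
    to_slack.publish(String(data='Robot is online'))
    rospy.spin()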

You can find the source code (in Python) and the (little) documentation here: https://github.com/smart-robotics-team/slack-ros-pkg

I hope you will enjoy this package.

New node: face_detection_tracking

From Philippe Ludivig via ros-users@


I built a face detection and tracking system, which I would like to add to the repository.

A more detailed explanation and some example videos can be found here:
http://www.phil.lu/?page_id=328

The code can be found on github:
https://github.com/phil333/face_detection

I have added some documentation here:
http://wiki.ros.org/face_detection_tracking#preview



As a side note, I initially tried the ROS package proposal process:
http://wiki.ros.org/PackageProposalProcess
I am not sure if this documentation is still up to date, but since nobody responded, I guess it should be corrected/removed.

New package: joystick_sdl

From Mike Purvis via ros-users@

A small Christmas present to share, especially for non-Ubuntu ROS users:


For myself as a Mac user, it's long been a thorn in my side that I'm unable to plug in a joystick to drive around real and simulated robots -- when rviz, Gazebo, rqt, and everything else in ROS runs under OS X, why do I have to start a VM just to do a little teleoperation?
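
For reference, a typical teleoperation consumer of the sensor_msgs/Joy messages a joystick driver publishes looks like the following sketch (axis indices and scale factors are arbitrary placeholders):

    #!/usr/bin/env python
    # Minimal Joy -> Twist teleop sketch; axis mapping and scales are placeholders.
    import rospy
    from sensor_msgs.msg import Joy
    from geometry_msgs.msg import Twist

    def joy_callback(joy):
        cmd = Twist()
        cmd.linear.x = 0.5 * joy.axes[1]   # left stick, forward/back
        cmd.angular.z = 1.0 * joy.axes[0]  # left stick, left/right
        cmd_pub.publish(cmd)

    rospy.init_node('joy_teleop_sketch')
    cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('joy', Joy, joy_callback)
    rospy.spin()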

Please give it a try and let me know your thoughts,
From Victor Mayoral Vilches via ros-users@


For the last months we provided several training sessions on how to use our brains and robots based in ROS. While doing so we noticed that many people struggled at understanding ROS so we started exploring a way to make this process easier.

We prototyped different concepts and decided that we ideally wanted to reach high school students. At that point we removed the assumption of "coding skills" from the equation, which made us look into systems like Scratch for robot programming. After taking inspiration from previous work, we are happy to present robot_blockly: a multiplatform, web-based tool for programming robots and drones that use ROS.

Here's a short clip that introduces robot_blockly (previously called ROSimple):


We like to think of robot_blockly as a simple way to program robots using ROS. Code is available at https://github.com/erlerobot/robot_blockly and there's a first iteration of documentation at http://wiki.ros.org/blockly
From Paul Bouchier via ros-users@

I'm pleased to announce the release of ROS documentation and source code for the swiftnav_piksi package. 

This package is a ROS release of a driver for Swift Navigation's Piksi RTK GPS receiver module. A pair of Piksi modules connected by a wireless link provides the location of each receiver relative to the other with accuracy as good as a couple of centimetres when there's a clear view of the sky. In addition, each Piksi module provides its location with typical GPS accuracy (about 3 meters). Often, one module is a stationary base station, which may optionally be located at a surveyed point, while the other is mounted on a rover and provides ROS navigation software with a highly accurate position relative to the base station.

RTK GPS has been around for a long time; however, the Piksi devices hit a new low price point at under $1000 per pair.

The documentation is at http://wiki.ros.org/swiftnav_piksi, and source and the standard GitHub bug lists etc. are at https://github.com/PaulBouchier/swiftnav_piksi.

Introducing A Better Inverse Kinematics Package

From Patrick Beeson via ros-users@

TRACLabs Inc. is glad to announce the public release of our Inverse Kinematics solver TRAC-IK.  TRAC-IK is a faster, significantly more reliable drop-in replacement for KDL's pseudoinverse Jacobian solver.

Source (including a MoveIt! plugin) can be found at:
https://bitbucket.org/traclabs/trac_ik.git

TRAC-IK has a very similar API to KDL's IK solver calls, except that the user passes a maximum time instead of a maximum number of search iterations. Additionally, TRAC-IK allows error tolerances to be set independently for each Cartesian dimension (x, y, z, roll, pitch, yaw).

More details:

KDL's joint-limited pseudoinverse Jacobian implementation is the solver used by various ROS packages and MoveIt! for generic manipulation chains.  In our research with Atlas humanoids in the DARPA Robotics Challenge and with NASA's Robonaut 2 and Valkyrie humanoids, TRACLabs researchers experienced a high rate of solve errors when using KDL's inverse kinematics functions on robotic arms.  We tracked the issues down to the fact that theoretically-sound Newton methods fail in the face of joint limits.  As such, we have created TRAC-IK, which concurrently runs two different IK methods: 1) an enhancement of KDL's solver (which detects and mitigates local minima that can occur when joint limits are encountered during gradient descent) and 2) a Sequential Quadratic Programming IK formulation that uses quasi-Newton methods known to better handle non-smooth search spaces.  The results have been very positive.  By combining the two approaches, TRAC-IK outperforms both standalone IK methods, with no additional runtime overhead for small chains and significant improvements in time for large chains.

Details can be found in our Humanoids 2015 paper here:
https://personal.traclabs.com/~pbeeson/publications/b2hd-Beeson-humanoids-15.html

A few high-level results are shown in the attached (low-res) figure, tracik_results.png.

Perception Neuron Motion Capture available in ROS

From Alexander Rietzler and Simon Haller

We are pleased to announce a new package for making the Biovision
Hierarchy (BVH) data generated by the Perception Neuron motion capture
system [1] available under ROS in Linux.

The software perception-neuron-ros [2] contains two packages:
  • A ROS Serial package under Windows that reads the BVH data and sends it to the ROS server
  • A ROS package that reads the BVH data and broadcasts the frames to TF

[1] https://neuronmocap.com/ (currently pre-orderable)
[2] https://github.com/smhaller/perception-neuron-ros

From David Fischinger via ros-users@

We are pleased to announce a new package for grasp calculation on unknown and known objects.

This package receives a point cloud representing objects and identifies where to best place the gripper. The algorithm does not require segmentation or a-priori knowledge about the objects. It has already been employed on various platforms, including a PR2 [4], a Kuka LWR [5], a Schunk arm [6] and the service robot Hobbit [7].

More details, a scientific foundation and evaluation results can be found in an IJRR journal publication from August 2015 [1], a more technical description and a simple getting started guide can be found at [2]. Code is available on GitHub [3].

Currently Indigo is supported.

Links:

  1. http://ijr.sagepub.com/content/34/9/1167.full.pdf+html - IJRR publication 2015, or http://users.acin.tuwien.ac.at/dfischinger/files/IJRR_FinalRevision.pdf (final revision)
  2. http://wiki.ros.org/haf_grasping - Technical description, getting started
  3. https://github.com/davidfischinger/haf_grasping - Code on GitHub

Videos:

PR2, unknown objects

Kuka arm, known object

Schunk arm, unknown objects in box

Service robot Hobbit

Announcing Mapviz, a ROS Visualization Tool

From Edward Venator via ros-users@

Southwest Research Institute (SwRI) is pleased to announce the release of Mapviz, a graphical tool for viewing ROS data from outdoor robotic systems. Mapviz, like Rviz, uses an extensible plugin architecture to display ROS data. The Intelligent Vehicles section at SwRI developed Mapviz as a tool for our work using ROS in automotive applications. Whereas Rviz is designed for 3D display of data from indoor robots, Mapviz is a 2D (top-down) viewer designed for use with outdoor robots. We look forward to feedback and contributions from others who find Mapviz useful.

Some Features of Mapviz:

* Tile Map background maps from OpenMapQuest (open.mapquest.com) and stamen design (http://maps.stamen.com), with Bing map support coming soon
* Multires Image backgrounds to display custom maps or backgrounds
* Robot marker(s) with custom robot image icons
* Plugins for several ROS message types:
    - sensor_msgs/Disparity
    - gps_common/GPSFix (sensor_msgs/NavSatFix support coming soon)
    - sensor_msgs/Image
    - sensor_msgs/LaserScan
    - visualization_msgs/Marker
    - nav_msgs/Odometry
    - nav_msgs/Path
    - marti_visualization_msgs/Textured Marker (a custom message type for painting image data onto the visualization)
    - TF transforms

Jerry Towler of Southwest Research Institute will give a presentation on Mapviz during Day 2 of ROSCon 2015.

If you want to use Mapviz, it is available on GitHub at http://github.com/swri-robotics/mapviz. Ubuntu Debian installers from the OSRF build farm are up in the shadow-fixed repository for Indigo (Trusty) and Jade (Trusty, Utopic, and Vivid). Additionally, Mapviz can be compiled from source for ROS Jade, Indigo, Hydro, Groovy, and Fuerte. Documentation is available on the ROS wiki at http://wiki.ros.org/mapviz

New Universal Robot driver

From Thomas Timm Andersen:

I am glad to share my new driver for the Universal Robots: https://github.com/ThomasTimm/ur_modern_driver

 

The driver is written in C++ and is designed to replace the old driver transparently, while solving some issues, improving usability, and enabling compatibility with ros_control.

 

The most noticeable differences for current users will be the ability to use the teach pendant while the driver is connected as well as support for the UR3 and newest firmware versions.

The driver also makes it possible to send URScript commands from ROS to the robot, and introduces a joint speed interface for doing visual servoing with the UR robots.
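
For example, a raw URScript snippet can be sent to the robot as a std_msgs/String; the topic name used below is an assumption based on the driver's README, so verify it against your setup:

    #!/usr/bin/env python
    # Sketch: sending a URScript command through the driver. The topic name is
    # an assumption; verify it against the driver's documentation.
    import rospy
    from std_msgs.msg import String

    rospy.init_node('urscript_demo')
    pub = rospy.Publisher('ur_driver/URScript', String, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect
    pub.publish(String(data='set_digital_out(0, True)'))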

 

The driver resides in its own package, so you can install it and test it out without risking your current setup. Just use the included launch files instead of the ones from urX_bringup.

 

I have also included support for ros_control for those who would like to test that out. Note that the use of ros_control will probably require some minor modifications to your existing code if you choose to incorporate this into an existing project. Whether the driver should expose the "old" action interface or be controlled via ros_control is determined by a parameter at launch time. Note that the PID controllers are not optimally tuned at this time.

 

I have tested the driver with all the newest versions of ur-sim (1.6.08725, 1.7.10857, 1.8.16941, 3.0.16471, and 3.1.18024) as well as some real robots (UR5 and UR10, both with a CB2 controller running 1.8.14035).

 

Please try it out and report any issues/incompatibilities so it can hopefully make it into Jade (or Indigo?).

nimbro_network: Multi-master ROS network solution

From Max Schwarz via ros-users@

Our group has developed a network transport solution for multi-master ROS systems. We used it with great success in the DLR SpaceBot Cup and the DARPA Robotics Challenge, where our team (NimbRo Rescue) got fourth place.

As opposed to other multi-master solutions, our software is targeted at *bad* networks, such as WiFi connections. For example, it can handle large latencies and large packet-drop ratios without introducing further latency or dropping messages.

The stack is now available under BSD-3 license here:

https://github.com/AIS-Bonn/nimbro_network

Some features:

 * Topic transport:
   * TCP protocol for transmission guarantee
   * UDP protocol for streaming data without transmission guarantee
   * Optional transparent BZip2 compression using libbz2
   * Experimental Forward Error Correction (FEC) for the UDP transport
   * Automatic topic discovery on the receiver side. The transmitter defines
      which topics get transferred
   * Optional rate-limiting for each topic
 * Service transport:
   * TCP protocol with minimal latency (support for TCP Fast-Open is included)
   * UDP protocol with minimum latency
 * Additional nodes/filters for transmitting the ROS log, TF tree and
   H.264-compressed camera images.
 * rqt plugins for visualization and debugging of network issues

For more details, see the included README file. If you have any questions,
please don't hesitate to ask me. We would also like to hear from you if you
end up using our software!
From Paul Bovbel via ros-users@

As part of a summer hack project at Clearpath Robotics, I've released a ROS package [http://wiki.ros.org/vrpn_client_ros] wrapping the VRPN client library. This package provides support for exposing information on VRPN Tracker devices (pose, velocity, acceleration data) into ROS. 

We've tested it in house with VICON and OptiTrack mocap systems - please keep in mind that neither system exposes velocity or acceleration data over VRPN. Any feedback, bug reports, or validation is greatly appreciated!
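
Consuming the tracker data is a plain geometry_msgs/PoseStamped subscription; the topic layout below follows the package's node/tracker/pose convention, with the tracker name being a placeholder:

    #!/usr/bin/env python
    # Sketch: reading a mocap pose from vrpn_client_ros. Replace 'RigidBody1'
    # with your tracker's name; the topic layout is assumed from the package docs.
    import rospy
    from geometry_msgs.msg import PoseStamped

    def pose_cb(msg):
        p = msg.pose.position
        rospy.loginfo('Tracker at x=%.3f y=%.3f z=%.3f', p.x, p.y, p.z)

    rospy.init_node('vrpn_listener')
    rospy.Subscriber('/vrpn_client_node/RigidBody1/pose', PoseStamped, pose_cb)
    rospy.spin()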

A list of VRPN supported devices can be found here [https://github.com/vrpn/vrpn/wiki/Supported-hardware-devices]

ROS package for Basler cameras

From Beatriz Leon Pinzon via ros-users@

We have created a ROS package to publish images from a Basler camera. We have a Basler Dart camera, but the package should work with any of their cameras as they all use the same API.

Here is the github address:
https://github.com/shadow-robot/basler_camera

Their cameras are not yet UVC-compatible, so this package should be helpful in the meantime.
From Mani Monajjemi via ros-users@

I would like to announce the release of the "bebop_autonomy" package: a ROS driver for the Parrot Bebop drone [1].


This driver is based on Parrot's official ARDroneSDK3 [2]. As of this release, it provides interfaces for piloting the drone, subscribing to its camera and on-board sensory data, and tweaking its configuration.
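
Piloting follows common ROS conventions: std_msgs/Empty messages for takeoff/land and geometry_msgs/Twist for velocity commands. The sketch below assumes the driver's default topic names, so check the documentation for your namespace:

    #!/usr/bin/env python
    # Sketch: take off, hover, move forward briefly, land.
    # Topic names assume the driver's default namespace; check the docs.
    import rospy
    from std_msgs.msg import Empty
    from geometry_msgs.msg import Twist

    rospy.init_node('bebop_demo')
    takeoff = rospy.Publisher('bebop/takeoff', Empty, queue_size=1)
    land = rospy.Publisher('bebop/land', Empty, queue_size=1)
    cmd_vel = rospy.Publisher('bebop/cmd_vel', Twist, queue_size=1)

    rospy.sleep(1.0)
    takeoff.publish(Empty())
    rospy.sleep(5.0)              # wait for the drone to stabilize
    fwd = Twist()
    fwd.linear.x = 0.2            # gentle forward motion
    cmd_vel.publish(fwd)
    rospy.sleep(2.0)
    cmd_vel.publish(Twist())      # stop
    land.publish(Empty())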

Happy flying!

Mani Monajjemi
AutonomyLab, Simon Fraser University

PS. Feel free to join the discussion on feature development of this driver (and ardrone_autonomy) here: https://trello.com/b/C6rNl8Ux 

From Mark Silliman


Liatris is a new open source project built with ROS.  Liatris determines any object's identity and precise pose using a touch screen and RFID reader.


The robot in the video is programmed to identify and grasp any object placed on the touch screen, regardless of the object's shape, size or positioning. Liatris can immediately identify the object and determine its orientation using capacitive touch and RFID technology. It discerns the object utilizing the CAD model downloaded from the Liatris API and uses instructions provided in the API to physically grasp the object. The instructions define the optimal way in which the robot should grasp a specific object while avoiding collisions with other objects (thanks to MoveIt.)

The result is an accurate 3D perception and mobile manipulation solution.


Learn more at Liatris.org

roslint update

From Mike Purvis via ros-users@

roslint has been updated to the newest versions of its underlying linters, pep8 and cpplint, thanks to some work by Alex Henning. Version 0.10.0 has been released into Indigo and Jade and will be available in shadow shortly. Relevant PRs:


If you are the maintainer of one of these roslint-using packages, be aware that this change may result in new lint warnings on your package. Especially if you use the roslint_add_test macro to run the linter as part of a package's unit tests, you may want to grab roslint 0.10.0 from shadow so you can verify packages in advance of the next sync.

If you're not currently a roslint user, but develop (or maintain) ROS packages, consider integrating roslint. We use it on a bunch of our internal software at Clearpath-- having stuff linted upfront is great for making code review about the real design and implementation issues, and not about style.

New Tool: roscompile - Catkin Metadata Helper

From David Lu! via ros-users@

I have a confession to make. I'm not very good at Catkin. One reason
is because there is a lot more metadata to maintain. Unlike Ye Olde
Rosbuild, where you could add a dependency by adding a single tag in
the manifest, Catkin requires you add the build and run dependencies
to the package.xml, as well as add the dependencies in the
CMakeLists.txt in a couple places.

That's why I've developed a tool called roscompile, which I've found
invaluable for cleaning up my packages for release. It attempts to do
the hard work for you by 'compiling' the information that already
exists in the package.

 * Did you just add a dependency on a new package in your source code?
roscompile will read the source and add the appropriate tags to your
package.xml and CMakeLists.txt.

 * Create a new launch file that uses map_server? roscompile reads
launch files to add run_depends.

 * Add a new msg/srv/action/dynamic_reconfiguration/plugin? roscompile
generates the metadata for that too.

Check it out here: https://github.com/DLu/roscompile
(contains a full list of features and issues page)

Of course, the tool is far from perfect. It should not be used to
blindly make changes to critical repos. I welcome collaboration to
help cover people's use cases other than my own.
From Andreas ten Pas via ros-users@


Despite being available for quite a while, I wanted to officially announce our ROS Hydro/Indigo package for localizing grasps in 3D point clouds: http://wiki.ros.org/agile_grasp

Here's a demo of Rethink's Baxter robot localizing and executing grasps in a densely cluttered scene.  

Instructions for using our package are available at the ROS wiki page given above.

If you find any problems, please report them at: https://github.com/atenpas/agile_grasp/issues
From Alessio Levratti via ros-users

I developed a new node for skeleton tracking with the ASUS Xtion Pro Live by editing openni2_tracker.
The main differences are:
  • The node publishes a new message (user_IDs) containing the ID of the tracked user
  • The node publishes the video stream captured by the Xtion
  • The node publishes the Point Cloud captured by the Xtion

The package can be downloaded here: https://github.com/Chaos84/skeleton_tracker.git
Just type:
    $ git clone https://github.com/Chaos84/skeleton_tracker.git

From Vincent Rabaud via ros-users@

On behalf of Aldebaran and SoftBank Robotics, I am pleased to announce official ROS support for the Pepper robot. A local bridge with its NAOqi software is provided for all its sensors, as well as an accurate URDF and meshes. Please find more instructions and tutorials on the ROS wiki page at http://wiki.ros.org/Robots/Pepper

Other good news: Aldebaran is now also providing an official C++ bridge with its NAOqiOS. It is pure open source: under the Apache 2.0 license with shared maintainership with the community.

As usual, let's discuss all that on the SIG.

Enjoy !
The Aldebaran team

There is now a ROS package that provides a bridge between ROS and the OpenHAB home automation system.
OpenHAB is an open source system that connects to virtually any intelligent device, such as smoke detectors, motion detectors, temperature sensors, security systems, TV/audio, fingerprint scanners, lighting, 1-Wire, Wemo, CUPS, DMX, KNX, openpaths, Bluetooth, MQTT, Z-Wave, telephony, Insteon and weather sensors. OpenHAB also connects to web services such as Twitter, Weather, etc. In addition, OpenHAB provides a basic web GUI and iPhone/Android app for setting and dynamically viewing values. openhab.org/features
Give your robot knowledge of the wider world
Use Cases:
  • A motion detector or smoke detector in OpenHAB triggers and ROS dispatches the robot to the location.
  • ROS facial recognition recognizes a face at the door and OpenHAB unlocks the door.
  • A washing machine indicates to OpenHAB that the load is complete and ROS dispatches a robot to move the laundry to the dryer.
  • The OpenHAB MQTT binding indicates that Sarah will be home soon and a sensor indicates that the temperature is hot. ROS dispatches the robot to bring Sarah's favorite beer. OpenHAB turns on her favorite rock music and lowers the house temperature.
  • A user clicks on the OpenHAB GUI on an iPad and selects a new room location for the robot for telepresence. The message is forwarded by the openhab_bridge to ROS and ROS dispatches the robot.
  • A sentry robot enters a dark area and sends a command to OpenHAB to turn on the lights in that area.
With the openhab_bridge, virtually any home automation device can easily be set up to publish updates to the openhab_updates topic in ROS, giving a ROS robot knowledge of any home automation device as well as a number of web services. ROS can publish to the openhab_set topic and the device in OpenHAB will be set to the new value (for example, setting a robot's position in OpenHAB). ROS can also publish to the openhab_command topic and the device in OpenHAB will act on the specified command (for example, turning on a light).
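
A sketch of the ROS side of such an interaction; note that the std_msgs/String type and the "Item,Value" payload used here are assumptions made for illustration, as the bridge defines its own message format:

    #!/usr/bin/env python
    # Sketch only: the actual openhab_bridge message definitions may differ;
    # the "Item,Value" String payload below is an assumption for illustration.
    import rospy
    from std_msgs.msg import String

    def on_update(msg):
        # e.g. "Kitchen_Motion,ON"
        rospy.loginfo('OpenHAB update: %s', msg.data)

    rospy.init_node('openhab_demo')
    rospy.Subscriber('openhab_updates', String, on_update)
    command = rospy.Publisher('openhab_command', String, queue_size=1)
    rospy.sleep(1.0)
    command.publish(String(data='Hallway_Light,ON'))  # turn on a light
    rospy.spin()
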
To download and for more information:

Darwin OP package for ROS/Gazebo available

From Philippe Capdepuy via ros-users@

Dear ROS users,

We just published 3 packages for simulating the Darwin OP robot on Gazebo (or to use with the real robot but with some extra work):
 - https://github.com/HumaRobotics/darwin_gazebo
 - https://github.com/HumaRobotics/darwin_description
 - https://github.com/HumaRobotics/darwin_control
They have been tested on both Hydro and Indigo, but they probably work for other distributions.

We also provide a user-friendly Python API with walking capabilities.

A quick tutorial and demo can be found here:
http://www.generationrobots.com/en/content/83-carry-out-simulations-and-make-your-darwin-op-walk-with-gazebo-and-ros

Credits also go to Taegoo Kim and Bharadwaj Ramesh for the meshes and original URDF on which this work was based.

Enjoy!

DUO3D ROS node release

From Krystian Gebis via ros-users@

Hello all,

Recently, I have been working on creating a ROS driver for the DUO3D camera. After working with the DUO team, I have managed to wrap ROS around the DUO API functions and have the camera images published as ROS messages of type sensor_msgs::Image. For those of you who do not know about DUO3D, it is a new, relatively inexpensive stereoscopic camera that allows for many different custom solutions such as better lenses, a wider baseline, etc. More information about DUO3D can be found here: https://duo3d.com/.

For those of you who are interested, here is a link to the github repository where I have developed the duo3d_camera node: https://github.com/l0g1x/DUO-Camera-ROS

As for now, I am still in the process of talking with DUO on how to package their shared libraries into a Debian package, so once I get that figured out with them, I will try to release the first version to the ROS repo.

I welcome everyone to give me feedback, as this is my first contribution back to the ROS community.
From Pouyan Ziafati via ros-users@

Dear All,

I am happy to announce the release of the retalis package for ROS. The Retalis language supports a high-level and efficient implementation of a large variety of robotic sensory data processing and management functionalities.

Please see the description, tutorial and performance evaluation at  http://wiki.ros.org/retalis

Best regards,

Updated package: razor_imu_9dof

From Kristof Robot via ros-users: 

I am happy to announce Hydro and Indigo versions of razor_imu_9dof, a
package that provides a ROS driver for the Sparkfun Razor IMU 9DOF
(http://wiki.ros.org/razor_imu_9dof).
It allows assembling a low cost Attitude and Heading Reference System
(AHRS) which publishes ROS Imu messages for consumption by packages
like robot_pose_ekf.
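
Consuming the output is a standard sensor_msgs/Imu subscription; a minimal sketch that prints the yaw angle (the 'imu' topic name is an assumption, see the wiki page for the exact name):

    #!/usr/bin/env python
    # Sketch: print yaw from the IMU orientation quaternion.
    # The 'imu' topic name is an assumption; see the package wiki.
    import math
    import rospy
    from sensor_msgs.msg import Imu
    from tf.transformations import euler_from_quaternion

    def imu_cb(msg):
        q = msg.orientation
        _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
        rospy.loginfo('yaw: %.1f deg', math.degrees(yaw))

    rospy.init_node('imu_listener')
    rospy.Subscriber('imu', Imu, imu_cb)
    rospy.spin()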

Major updates (see Changelog [1] for details):
- catkinized
- upgraded to be fully compatible with the ROS navigation stack (and in particular robot_pose_ekf)
- major upgrade of the wiki documentation (http://wiki.ros.org/razor_imu_9dof)

Video demonstrating the use of razor_imu_9dof with robot_pose_ekf to improve odometry.

For more information, and detailed instructions, see
http://wiki.ros.org/razor_imu_9dof.

I'd like to thank Tang Tiong Yew for the good work on the previous
Fuerte and Groovy versions, and Peter Bartz for the excellent
firmware.
Last but not least, a big thanks to Paul Bouchier, who triggered this
upgrade, and was a major contributor overall.

Enjoy!

Kristof Robot

[1] http://docs.ros.org/indigo/changelogs/razor_imu_9dof/changelog.html

A rosbag implementation in Java

From Aaron Schiffman via @ros-users

Dear ROS-Users,

I've created a rosbag writer implementation for Java, and posted it to a new Bitbucket repository. It should be capable of writing an uncompressed (format 2.0) rosbag in Android or Java ROS implementations (client or server). If you're interested, the source repository is located at:
 
 
 
 
 
 
aaron_sims / jrosbag on Bitbucket
 
How to use the Bag class:
  1. Initialize the org.happy.artist.rmdmia.utilities.ros.bag.Bag class.
  2. Call bag.start(os, Bag.CHUNK_COMPRESSION_NONE); // where os is the OutputStream you intend to write the file to. Examples could be a FileOutputStream, or a network output stream that writes the file to Google Drive or Dropbox.
  3. Call bag.addConnectionHeader(char[] topic, int conn, char[] connection_header_hex); for each new connection header on connection handshake. int conn is a unique int connection id chosen for the connection (it might be a good idea to iterate through topic ids to build an int array, or use another mechanism to choose a unique int). connection_header_hex is the ROS-serialized message of the connection header.
  4. Call bag.addMessage(long time, int conn, char[] message_data_hex); Pass in the long time, the associated connection header int conn id, and the ROS-serialized message to add to the rosbag file.
This Java code is poorly documented; however, I wanted to share it with the ROS community for Java/Android ROS clients that want to record rosbag files. Good luck using it. I am releasing it under the Apache 2.0 license.

I wish I had more time to clean up the code better, and if you have questions or want to contribute send me a message.

Thanks,

Aaron 

New Package: mongodb_store


From Nick Hawes via ros-users@

I would like to announce the release of a new suite of tools to enable the persistent storage, analysis and retrieval of ROS messages in a MongoDB database.

The mongodb_store package:

http://wiki.ros.org/mongodb_store

... provides nodes to store arbitrary ROS messages in a MongoDB database, query the database and retrieve messages, with helper classes in C++ and Python. Nodes are also available to provide rosbag-like functionality using the same DB format (http://wiki.ros.org/mongodbstore#LoggingofTopics:mongodblog) and parameter persistence across system runs (http://wiki.ros.org/mongodbstore#Parameterpersistence:config_manager.py).
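
From Python, the typical entry point is the message store proxy; the sketch below follows the package tutorials, but double-check class and method names against the wiki:

    #!/usr/bin/env python
    # Sketch based on the mongodb_store tutorials; verify names against the wiki.
    import rospy
    from geometry_msgs.msg import Pose
    from mongodb_store.message_store import MessageStoreProxy

    rospy.init_node('mongodb_store_demo')
    msg_store = MessageStoreProxy()

    p = Pose()
    p.position.x = 1.0
    msg_store.insert_named('docking pose', p)  # store the message under a name

    stored, meta = msg_store.query_named('docking pose', Pose._type)
    rospy.loginfo('retrieved pose x=%.1f', stored.position.x)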

Packages are available on Ubuntu for Indigo and Hydro, e.g.

  • ros-indigo-mongodb-log - The mongodb_log package
  • ros-indigo-mongodb-store - A package to support MongoDB-based storage and analysis for data from a ROS system, eg. saved messages, configurations etc
  • ros-indigo-mongodb-store-msgs - The mongodb_store_msgs package

These tools were developed by the STRANDS project to support the development, debugging and runtime introspection of long-term autonomous mobile robots, but we hope they will be useful to the ROS community more generally.

In the near future we plan to release tools for serving maps from mongodb_store and for logging streams of RGB-D data in a compressed format.

Note that there is an overlap in functionality between these tools and warehouse-ros. We developed our own solution as the existing packages appeared to be unsupported and special-purpose, but as this appears to be changing, we may want to look at combining these two packages.

For feedback, pull requests, feature requests and bug reports please go to: https://github.com/strands-project/mongodb_store/issues

From Paul Hvass, via ros-users@

Robotics and automation systems are increasingly reliant on both 2D and 3D imaging systems to provide both perception and pose estimation. Calibration of these camera/robot systems is necessary, time consuming, and often a poorly executed process for registering image data to the physical world. SwRI is continuing to develop the industrial calibration library to provide tools for state-of-the-art calibration with the goal to provide reliably accurate results for non-expert users. Using the library, system designers may script a series of observations that ensure sufficient diversity of data to guarantee system accuracy. Often interfaces to motion devices such as robots may be included to fully automate the calibration procedure.

More information can be found on the ROS-I blog post.

From Andreas Bihlmaier via ros-users@

Dear ROS community,

I'm pleased to announce http://wiki.ros.org/arni - a collection of tools for Advanced ROS Network Introspection.
From the wiki page:
"Advanced ROS Network Introspection (ARNI) extends the /statistics
features introduced with Indigo and completes the collected data with
measurements about the hosts and nodes participating in the network.
These are gathered from an extra node that has to run on each host
machine. All statistics or metadata can be compared against a set of
reference values using the monitoring_node. The rated statistics allow
to run optional countermeasures when a deviation from the reference is
detected, in order to remedy the fault or at least bring the system in a
safe state."

No modification of existing nodes is required in order to use the
monitoring features. Therefore, the barrier of entry is very low:
- See the arni tutorial
or
- git clone https://github.com/ROS-PSE/arni into your catkin_ws
- roslaunch arni_core init_params.launch
- start all your other nodes
- rosrun rqt_gui rqt_gui
- Plugins -> Introspection -> Arni-Detail
  (Click on an item (host, node, topic or connection) in the tree view
  to get more details and graphs in the other widget)
- Enjoy out of the box distributed metadata-based monitoring

If you want to use the more advanced features in your own ROS network,
see the documentation on how to write "specifications" and "constraints".

The documentation can be found in the wiki including the tutorials (http://wiki.ros.org/arni/Tutorials).

Please give feedback and report any bugs found.


Many thanks to my students that worked hard on this:
Matthias Hadlich, Matthias Klatte, Sebastian Kneipp, Alex Weber, Micha Wetzel
From Georg Heppner via ros-users@

Hi everyone,


it is my pleasure to announce the schunk_svh_driver[1] package that you can use to control the Servo-electric 5-Finger Gripping Hand SVH [2] produced by Schunk.

The SVH is the first 5-finger hand produced in series and enables a wide range of complex motions due to its 1:1 scale and anthropomorphic design. It provides an easy interface for standalone usage as well as integration into your project, comes with a detailed 3D model based on the original CAD data, and was tested extensively during several public demonstrations such as Automatica. Comprehensive documentation is already provided on the wiki and should allow you to easily use the package in your projects. At [3] you can see a YouTube video of the hand in combination with the LWA4P, for which an early version of this package was used.

The package is currently available via git [4] and will soon be available via the package manager. It was tested with Hydro and Indigo but should work under most circumstances.

Please let me know if you have any feedback, suggestions or any trouble using the package.


Best Regards
Georg Heppner

[1] http://wiki.ros.org/schunk_svh_driver
[2] http://mobile.schunk-microsite.com/en/produkte/produkte/servoelektrische-5-finger-greifhand-svh.html
[3] https://www.youtube.com/watch?v=hPtSbPzROrs
[4] https://github.com/fzi-forschungszentrum-informatik/schunk_svh_driver

New Package: diff_drive_controller in ros_controllers

From Bence Magyar of PAL Robotics via ros-users@

Hi everyone,

PAL Robotics is pleased to announce the release of the diff_drive_controller that became available in Hydro and Indigo in the first quarter of 2014.

For those who already know it, I'd like to ask you to add your robot(s) to the wiki page with a moderately sized image and name: http://wiki.ros.org/diff_drive_controller#Robots.

For those who are new to it, documentation can be found at:
http://wiki.ros.org/diff_drive_controller

As the name suggests, this controller moves a differential drive wheel base. 
Features:
  • The controller takes geometry_msgs::Twist messages as input (see the usage sketch after this list).
  • Realtime-safe implementation.

  • Odometry computed and published from open or closed loop
  • Task-space velocity and acceleration limits
  • Automatic stop after command time-out
The controller will soon support skid steer platforms as well. 
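
A minimal open-loop test publisher for the controller; the topic name below assumes the controller was spawned as diff_drive_controller, so adjust it to your ros_control configuration:

    #!/usr/bin/env python
    # Sketch: drive forward slowly while turning. The topic name depends on how
    # the controller was named in your ros_control configuration.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('diff_drive_test')
    pub = rospy.Publisher('diff_drive_controller/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)  # keep publishing so the command time-out never triggers
    cmd = Twist()
    cmd.linear.x = 0.2
    cmd.angular.z = 0.1
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()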

Cheers,

New package: Augmented Reality System

From Hamdi Sahloul via ros-users@

Hi everyone!

I recently needed a reliable pose estimation system, and ar_pose (http://wiki.ros.org/ar_pose) failed to satisfy my needs as it depends on the old, very basic ARToolKit library.
Moreover, I found aruco_ros (http://wiki.ros.org/aruco_ros) to be a good package to begin with, but it only supports a single marker or a pair of markers, and it does not have a visualization system either.

So, I made my own package.
In order to avoid occlusions, I used marker boards (you still have the ability to use a 1x1 marker board), and it can now detect a virtually unlimited number of boards with very good accuracy.
Furthermore, it is able to handle many cameras at once, and finally displays the result in rviz (http://wiki.ros.org/rviz).

I would love for you to discover the rest yourself, so here is the link:


It would only cost you a camera and a couple of sheets of paper to try, so please do try it and let me know your impressions and feedback, which are highly appreciated!

Microsoft Kinect v2 Driver Released

From Thiemo and Alexis via ros-users@

Dear ROS Community,

I am Thiemo from the Institute for Artificial Intelligence at the University of Bremen. I am currently a PhD Student under the supervision of Prof. Michael Beetz. I'm writing this together with Alexis Maldonado, another PhD Student at our lab, who has helped mainly with the hardware aspects.

In the past few months I developed a toolkit for the Kinect v2 including: a ROS interface to the device (driver) using libfreenect2, an intrinsics/extrinsics calibration tool, an improved depth registration method using OpenCL, and a lightweight point cloud/image viewer based on the PCL visualizer and OpenCV.

The system has been developed for and tested in both ROS Hydro and Indigo (Ubuntu 12.04 and 14.04)

The driver has been improved to reach high performance, meaning it is able to process the sensor's information at the full frame rate (30 Hz) on acceptable hardware (not only high-end machines). This was achieved through parallelization of the image pipeline. Care has also been taken to allow transferring the complete data over compressed topics to other PCs (30 Hz data uses approx. 40 MB/s on the network).
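
Consuming the data is standard ROS image handling; a sketch that subscribes to the color stream with cv_bridge (the topic name is an assumption, the README lists the actual topics):

    #!/usr/bin/env python
    # Sketch: display the Kinect v2 color stream. The topic name is an
    # assumption; check the iai_kinect2 README for the published topics.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def image_cb(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('kinect2 color', frame)
        cv2.waitKey(1)

    rospy.init_node('kinect2_viewer')
    rospy.Subscriber('/kinect2/hd/image_color', Image, image_cb)
    rospy.spin()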

Especially interesting for other people with a PR2 robot: we have built a small mITX computer using an AMD A10-7850K processor and a PicoPSU. It is installed as a backpack on our PR2, with a Kinect v2 on the head above the cameras. This 'backpack PC' is necessary because the built-in computers of the PR2 don't support USB 3 and are already quite loaded with their normal workload.

We are glad to announce the release of the software to the ROS community, hoping it will be useful for others, especially people working in robotics research. Please see the following GitHub repository:

  https://github.com/code-iai/iai_kinect2

You will need a slightly patched version of libfreenect2, as indicated on the README. It is here:
  https://github.com/wiedemeyer/libfreenect2

Screenshots are also on the GitHub page.

We are looking forward to improvements and/or bug reports. Please use the GitHub tools for that.

Best regards,

Thiemo and Alexis

Institute for Artificial Intelligence
University of Bremen

New ROS package available for the Barrett Hand

From Román Navarro García via ros-users@

Hi Everyone,

We're pleased to announce a new package for the Barrett Hand BH8-28X

This package allows controlling the hand in either velocity or position mode, and reading the current state of the joints and sensors (fingertip torque and tactile sensors).

The software includes packages with the model description and a graphical interface (rqt) to interact with the hand.

Links:

http://wiki.ros.org/Robots/BarrettHand -> Technical description
http://wiki.ros.org/barrett_hand -> ROS package description 


Groovy and Hydro are currently supported, Indigo soon.

If you are interested in verifying all these features of the hand, you can visit us from 14th until 18th of September in booth nº303 at IROS 2014.

Best regards,

STDR Simulator v0.2 released

From Manos Tsardoulias 

Dear all,

We are happy to announce that the current version of STDR Simulator is 0.2! The changes compared to the v0.1.3 follow:

  • Several bugs were fixed
  • Code was refactored
  • Lidar resources were added
  • Added support of:
    • RFID tags and Readers
    • Thermal sources / sensors
    • CO2 sources / sensors
    • Sound sources / sensors

Special thanks to Sergey Alexandrov and Scott K Logan for code contributions.

Our future plans:
  • Make the sensor measurements more realistic
  • Add simulated battery in robots
  • Detection of robots footprint via other robots' distance sensors
  • Add a simple physics engine
It would be excellent if any of you would like to contribute, either by code development, issue reporting or feature requests!

Best,
The STDR team.

New Package: Behavior Trees pi_trees

From Patrick Goebel via ros-users@

Hello ROS Fans,

I have created a ROS package implementing behavior trees called pi_trees.  It is written in Python and is modeled after the most excellent executive_smach package (though without the visualizer).  The only documentation I have so far is a PDF which was copied out of a chapter from my latest ROS book mentioned earlier on the list.

The package consists of a standalone Python module and a ROS wrapper for connecting to ROS topics, services and actions.
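
To give a flavour of the behavior tree idea itself, here is a generic Python sketch; it is not the pi_trees API, just the concept of tasks composed into a sequence:

    # Generic behavior tree sketch -- not the pi_trees API.
    SUCCESS, FAILURE, RUNNING = range(3)

    class Task(object):
        def __init__(self, name, run=None):
            self.name, self._run = name, run
        def run(self):
            return self._run() if self._run else SUCCESS

    class Sequence(Task):
        """Runs children in order; stops as soon as one child is not SUCCESS."""
        def __init__(self, name, children):
            super(Sequence, self).__init__(name)
            self.children = children
        def run(self):
            for child in self.children:
                status = child.run()
                if status != SUCCESS:
                    return status
            return SUCCESS

    battery_ok = Task('battery_ok', lambda: SUCCESS)
    patrol = Task('patrol', lambda: RUNNING)
    root = Sequence('root', [battery_ok, patrol])
    print(root.run())  # prints 2, i.e. RUNNING: battery check passed, patrol in progress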

Hopefully someone will find the package useful.  And if anyone can find problems with the code or a better way of doing things, I'd love to hear it.

New Package: rqt_ez_publisher

From Takashi Ogura via ros-users@

I released rqt_ez_publisher for hydro and indigo.
rqt_ez_publisher automatically creates a GUI for publishing topics.
(It is a plugin for rqt, the standard GUI tool of ROS.)

It is similar to rqt_reconfigure, which is for parameters, but rqt_ez_publisher
is for topics.
While rqt_reconfigure requires some config files,
rqt_ez_publisher needs nothing. All you have to do is select a topic from the list.

This video shows how it works: https://www.youtube.com/watch?v=oajlOQfqJiw


For more detail, please read the wiki page.

Any feedback is welcome. Please open an issue on GitHub.

New Package: MAVLink to ROS generator

| No Comments | No TrackBacks
From Pedro Marques da Silva via ros-users@

Hi all :)


I am pleased to announce that I have created a MAVLink-to-ROS generator.

You can see the pre-release here: https://github.com/posilva/mav2rosgenerator and download it from here: https://pypi.python.org/pypi/mav2rosgenerator/0.1.2

I hope this will be helpful to ROS users who want to start using MAVLink to control robots.

Best 
From Ted Larson

OLogic has been involved with Project Tango since the very beginning of the project, but we have always had our eyes on the goal of using it for robotics applications. Indoor localization and mapping is one of several areas Project Tango is focused on, and when you overlap this with robotics, it is a perfect fit. Google has provided several SDKs for working with Project Tango in Java, C, or Unity, and has shown some impressive demos using sparse mapping under Unity to navigate around 3D virtual worlds or games on the device.

The phone has the ability to perform Visual Inertial Odometry (VIO), and we wanted to extend this to use within the context of ROS. We wrote some ROSJava nodes that use the SDK to access the VIO and publish pose, transform frame (tf), and odometry messages. This allowed us to display a URDF of a floating phone on a map in RViz and show where the phone is located in the office in near real time. We have several demo videos of our summer intern roaming around the office with a Project Tango phone while we visualize the phone's position and orientation in 3D space. It is just a starting point for all the things we want to do with Project Tango and ROS, but we have a good framework in place to add other nodes into the puzzle and get to the point soon where we will be able to navigate a robot around the office with only a Project Tango phone for the brains.

The project is available via a public project on Github at https://github.com/ologic/Tango and all the build instructions for getting it running on a Tango device are there via the wiki. There are lots of helpful hints and tips on building 3D maps using the Tango Mapper application (the one that Google provides), and then taking those maps and bringing them into ROS to try to navigate a space using an existing ROS robot. We will be adding to the project continually, as it is still definitely a work in progress.


Robopeak announces ROS Drivers for the RPLIDAR

| No Comments | No TrackBacks
From ShiKai Chen

Descriptions & Images:
=====================
RPLIDAR is a low-cost LIDAR sensor suitable for indoor robotic SLAM applications. It provides a 360-degree scan field and a 5.5 Hz rotation frequency with a guaranteed 6-meter range. Thanks to the high-speed image processing engine designed by RoboPeak, the overall cost is reduced greatly, making RPLIDAR an ideal sensor for cost-sensitive areas such as consumer robotics and hardware hobbyist projects.

The RPLIDAR core engine performs high-speed distance measurement at more than 2000 samples per second. For a scan that requires 360 samples per rotation, this yields the 5.5 Hz scanning frequency. Users can freely customize the scanning frequency from 2 Hz to 10 Hz by controlling the speed of the scanning motor; RPLIDAR will self-adapt to the current scanning speed.

ROS Node:
=====================

Videos:
====================
Odometer Free Hector map building using RPLIDAR

Scan Record:

About RoboPeak
~~~~~~~~~~~~~~~
RoboPeak is a research & development team working on robotics platforms and applications, founded in 2009. Our team members are software engineers, electronics engineers and new media artists, all based in China.

RoboPeak develops both software and hardware, which include personal robotic platforms, Robot Operating System and related devices.

Our vision is to enrich people's daily-life with the ever-changing development and innovation in robotic technologies.

New Package: handle_detector

| No Comments | No TrackBacks

From Andreas ten Pas via ros-users@

Hi all,

Although it has been available in the package repositories for quite a while, I wanted to officially announce our ROS Hydro package for localizing handles in 3D point clouds: http://wiki.ros.org/handle_detector

You can see a demonstration of the localization on Rethink Robotics' Baxter robot that is clearing several objects from a table in this video:

A tutorial for using our software is available at the ROS wiki page given above.

If you find any problems, feel free to report them at: https://github.com/atenpas/handle_detector/issues

All the best,

Andreas

New Package: robot_localization

| No Comments | No TrackBacks

From Tom Moore via ros-users@

I am pleased to announce the release of a new ROS package, robot_localization. The package estimates the state (3D pose and velocity) of a mobile robot through sensor fusion. Its features include:

* Fusion of an arbitrary number of sensors: the nodes do not restrict the number of input sources. If, for example, your robot has multiple IMUs or multiple sources of odometry information, the nodes within robot_localization can support all of them.

* Support for multiple ROS message types: all nodes in robot_localization can take in Odometry, Imu, PoseWithCovarianceStamped, or TwistWithCovarianceStamped messages.

* Per-sensor input customization: if a given sensor message contains data that you don't want to include in your state estimate, robot_localization's nodes allow you to exclude that data on a per-sensor basis.

* Continuous estimation: each node in robot_localization begins estimating the robot's state as soon as it receives a single measurement. If there is a holiday in the sensor data (i.e., a long period in which no data is received), the filter will continue to estimate the robot's state via a 3D motion model.

robot_localization currently contains only one node, ekf_localization, which, as the name implies, employs an extended Kalman filter. New nodes, such as an unscented Kalman filter node, will be added as they become available.
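
As an illustration of the kind of input these nodes fuse, here is a minimal rospy sketch of a wheel-odometry source publishing nav_msgs/Odometry with a covariance matrix. The topic name, frame ids and covariance values are placeholders for the example, not anything prescribed by robot_localization.

#!/usr/bin/env python
# Sketch of a wheel-odometry source that a state estimation node such as
# ekf_localization could fuse. Topic/frame names and values are placeholders.
import rospy
from nav_msgs.msg import Odometry

def main():
    rospy.init_node('wheel_odometry_source')
    pub = rospy.Publisher('wheel_odom', Odometry)  # add queue_size=10 on Indigo and newer
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        msg = Odometry()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'odom'       # assumed world-fixed frame
        msg.child_frame_id = 'base_link'   # assumed robot body frame
        msg.twist.twist.linear.x = 0.25    # forward velocity from encoders (placeholder)
        msg.twist.twist.angular.z = 0.05   # yaw rate from encoders (placeholder)
        # Diagonal covariance; very large values mark dimensions this sensor
        # does not actually measure.
        cov = [0.0] * 36
        for i, var in enumerate([0.01, 1e6, 1e6, 1e6, 1e6, 0.02]):
            cov[i * 6 + i] = var
        msg.twist.covariance = cov
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()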

robot_localization is currently available for ROS Groovy, Hydro, and Indigo. The package's wiki page at http://wiki.ros.org/robot_localization provides more details on how to integrate it with your robot. 

Development of this node was funded by Charles River Analytics, Inc.

From Dmitry Berenson via ros-users@

The ARC Lab at WPI is releasing the Datalink Toolkit ROS package, designed for remote operation of a robot over a high-latency and low-bandwidth datalink. The package was developed and extensively tested as part of the DARPA Robotics Challenge, though it is not specific to a type of robot.

The package allows the user to easily set up relays and compression methods for a single-master system. These relays avoid duplicating data sent over the datalink while compressing common datatypes (e.g. point clouds and images) to minimize bandwidth usage.
The toolkit includes both message-based and service-based relays so that data can be sent on demand or at a specified frequency. The service-based relays are more robust in low-bandwidth conditions, guarantee the synchronization of camera images and camera info messages, and allow more reconfiguration while running.

The key features of the package are:
- Generic relays with integrated rate throttling for all message types
- Dedicated relays with rate throttling for images and pointclouds
- Generic service-based relays with integrated rate throttling for all message types
- Dedicated service-based relays with integrated rate throttling for images and pointclouds
- Image resizing and compression using methods from OpenCV and image_transport
- Pointcloud voxel filtering and compression using methods from PCL, Zlib, and other algorithms. (Note: pointcloud compression is provided in a separate library that can be easily integrated with other projects)
- Launch files for easy use of the datalink software with RGBD cameras
- Works with ROS Hydro

Overall performance:
- Reliable data transfer for a wide range of bandwidths and latencies (e.g. at DRC Trials: 1Mb/s - 100 Kb/s bandwidth, 100ms - 1000ms latency)
- Pointcloud compression >8x depending on compression algorithm (without voxel filtering)
- Pointcloud compression >20x depending on compression algorithm (with voxel filtering)
- Image compression equivalent to image_transport (without image resizing) or better (with resizing)

Performance comparison with ROS for image transfer:
- 1.5x more images/second at 1Mb/s (grayscale image size 320x240)
- 2x more images/second at 100Kb/s (grayscale image size 320x240)
- 3x more images/second at 50Kb/s (grayscale image size 320x240)


For more information, please see the wiki here:

Get the package from our git repository here:

New Package: moveit_visual_tools

| No Comments | No TrackBacks
From Dave Coleman via moveit-users@

Greetings,

I'd like to announce MoveIt! Visual Tools - a new tool that will hopefully speed up your development time by providing easy-to-use Rviz markers and robot display tools for debugging and visualization. It is sometimes hard to understand everything that is going on internally with MoveIt!, but these quick convenience functions make it easy to visualize what your code is doing.

This package includes:
  • Basic geometric markers for Rviz
  • MoveIt! collision object tools
  • Trajectory visualization tools
  • Robot state tools. 
See the GitHub README for full documentation. This will be available as an Ubuntu debian package in the next Hydro update.

I encourage everyone to share their MoveIt! work to the community as well, thanks!

New package: Frontier Exploration

| No Comments | No TrackBacks
From Paul Bovbel via ros-users@

Hello ros-users,

This package implements frontier exploration using an action server (explore_server), that can be controlled from rviz via explore_client, or directly from other nodes.

When starting out with ROS, I was frustrated that there was no (maintained) exploration package that worked solely using the core ROS APIs (i.e. navigation).

Internally, this package contains a custom costmap_2d layer plugin that could be adapted for more complex exploration strategies.
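
For completeness, here is a rough rospy sketch of sending a goal to the action server from code rather than from rviz. The action type, goal fields and frame used below are assumptions based on this announcement, so check the package documentation for the exact names.

#!/usr/bin/env python
# Sketch of commanding explore_server from a node instead of rviz.
# Action/goal names below are assumptions; consult the package docs.
import rospy
import actionlib
from geometry_msgs.msg import Point32
from frontier_exploration.msg import ExploreTaskAction, ExploreTaskGoal

def start_exploration():
    rospy.init_node('exploration_client')
    client = actionlib.SimpleActionClient('explore_server', ExploreTaskAction)
    client.wait_for_server()

    goal = ExploreTaskGoal()
    goal.explore_center.header.frame_id = 'map'
    goal.explore_center.point.x = 1.0
    goal.explore_boundary.header.frame_id = 'map'
    # A 10 m x 10 m square boundary around the origin (placeholder values).
    for x, y in [(-5.0, -5.0), (5.0, -5.0), (5.0, 5.0), (-5.0, 5.0)]:
        goal.explore_boundary.polygon.points.append(Point32(x=x, y=y))

    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    start_exploration()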

Please email or post any feedback, comments or concerns!

New version of STDR Simulator

| No Comments | No TrackBacks
From Manos Tsardoulias via ros-users@

The current version of STDR Simulator is 0.1.3! The changes compared to the v0.1.0 follow:
  • Full support of robots with polygonal footprint
  • Zoom in STDR GUI is also performed with the mouse wheel
  • Fixed saving and loading robots and sensors from the Robot Creator in GUI
  • Added odometry publisher
  • Added robot-to-obstacles collision check
Special thanks to trainman419 for contributions to 
  • the polygonal robot support and the odometry publisher
  • GUI makefiles
  • writing a tutorial on robot teleoperation with STDR using teleop_twist_keyboard.

The next version (v0.1.4) will include full support of RFID tags and RFID reader sensors.

New Package nav2d

| No Comments | No TrackBacks
From Sebastian Kasperski via ros-users@

Hello ROS users,
 
I would like to share a set of ROS packages that provide nodes for autonomous exploration and map building for mobile robots moving in planar environments. More information and some help can be found in the ROS-Wiki:
 
The source is available via Github:
 
It contains ROS nodes for obstacle avoidance, basic path planning and graph-based multi-robot mapping using the OpenKarto library. Autonomous exploration is done via plugins that implement different cooperation strategies. Additional strategies should be easy to implement with little overhead.
 
These nodes have been used on a team of Pioneer robots, but other platforms should also work. A set of ROS launch files is included to test the nodes in a simulation with Stage. Please feel free to try it and post issues on Github.

New Package: catkin_lint

| No Comments | No TrackBacks
From Timo Röhling via ros-users@

I have created a tool to check catkin packages for common build
configuration errors. I announced it to the ROS Buildsystem SIG a while
ago, and I think it is ready for public scrutiny:

Source: https://github.com/fkie/catkin_lint
PyPI Package: https://pypi.python.org/pypi/catkin_lint
Ubuntu PPA: https://launchpad.net/~roehling/+archive/latest

It runs a static analysis with a simplified CMake parser. Among the
checks are order constraints of macros, missing dependencies, missing
files, installation of targets and headers, and a few other things. The
checks are inspired by the catkin manual and issues I encountered in my
daily work routine.

Give it a try and feel free to post any issues on Github.

New Package: ROS Glass Tools

| No Comments | No TrackBacks
From Adam Taylor via ros-users@

We would like to announce ros_glass_tools, an open source project that aims to provide easy voice control, topic monitoring, and background alerts for robot systems running ROS using the Google Glass.  It communicates with ROS using the rosbridge_suite.  


More information about the tools can be found at the following links.


New Package: Announcing ROS/DDS proxies

| No Comments | No TrackBacks
From Ronny Hartanto of DFKI GmbH via ros-user@

Hi Everyone,

We are happy to announce the ros_dds_proxies:


Recently, there was some discussion about using DDS as a communication layer in ROS. This package contains our implementation using DDS middleware for multi-robot systems. We have been successfully using this implementation in our project (IMPERA). In our experiments, all messages were successfully delivered to all robots, even with communication outages of about 15 minutes.

Any comments or improvements are welcome.

From Angel Merino Sastre & Simon Vogl via ros-users@

Hi all,

We are happy to announce the sentis-tof-m100 ros package:

https://github.com/voxel-dot-at/sentis_tof_m100_pkg

This package provides support for the Bluetechnix Sentis ToF M100 camera
based on the software API that is provided with the camera, along with
a detailed installation how-to and a ready-to-use launch file with a
visualization example based on rviz.


Any comments/suggestions are welcome.

Introducing ROStful: ROS over RESTful web services

| No Comments | No TrackBacks
From Ben Kehoe via ros-users@

Hello all,
ROStful is a lightweight web server for making ROS services, topics, and actions available as RESTful web services. It also provides a client proxy to expose a web service locally over ROS.

Here at Berkeley we are working to bring Software as a Service (SaaS) paradigms into robotics. We have created ROStful as a starting point for creating SaaS tools using existing ROS services and actions.

ROStful web services primarily use the rosbridge JSON mapping for ROS messages. However, binary serialized ROS messages can be used to increase performance.

The purpose of ROStful is different from rosbridge: rosbridge provides an API for ROS through JSON using web sockets. ROStful allows specific services, topics, and actions to be provided as web services (using plain GET and POST requests) without exposing underlying ROS concepts.
The ROStful client proxy, however, additionally provides a modicum of multi-master functionality. The client proxy is a node that connects to a ROStful web service and exposes the services, topics, and actions locally over ROS.

The ROStful server is WSGI-compatible and can therefore be used with most web servers like Apache and IIS.
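
As a rough sketch of what the client side of a ROStful-exposed service could look like, here is a plain HTTP POST with a JSON body. The host, path and payload fields are hypothetical; the actual URL layout and message mapping depend on how you configure the server and on the service definition.

# Hypothetical client call to a ROS service exposed through ROStful.
# The endpoint path and payload fields are illustrative only.
import json
import urllib2  # Python 2 standard library; use urllib.request on Python 3

url = 'http://localhost:8080/add_two_ints'      # hypothetical endpoint
payload = json.dumps({'a': 2, 'b': 3})           # rosbridge-style JSON request body

req = urllib2.Request(url, payload, {'Content-Type': 'application/json'})
response = urllib2.urlopen(req)
print(json.loads(response.read()))               # e.g. {"sum": 5}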

Try it out (there are no dependencies!), and let us know what you think! https://github.com/benkehoe/rostful

Two minor notes:
  • Serialized ROS messages are sent with the MIME type 'application/vnd.ros.msg'. If there's a standard anyone else is using, let us know.
  • In the absence of a standard component description format, ROStful uses an INI-based format that may be of interest for creating such a standard description. See here for details.

STDR Simulator (Simple Two Dimensional Robot Simulator)

| No Comments | No TrackBacks

From Manos Tsardoulias



Dear all,


We are happy to announce the first release (v0.1) of the STDR Simulator (Simple Two Dimensional Robot Simulator) ROS package. A variety of robot simulators is already available; some characteristic examples are the Player/Stage/Gazebo project, USARSim, Webots, V-REP and many others. We acknowledge that these frameworks are state of the art and provide a vast range of services, from realistic 3D simulation to hardware support. The price you have to pay, though, is that they are either so architecturally complicated that they confuse the novice robotics researcher, or they require a lot of computational power to provide realistic 3D simulation. In addition, almost all of the aforementioned frameworks have a lot of dependencies that make the installation procedure time consuming and sometimes impossible due to dependency errors. What we envisioned was a simple simulator whose installation wouldn't require more than a few clicks, one that would allow the robotics researcher to materialize their ideas in a simple and efficient manner.


That is why we decided to create STDR Simulator. STDR Simulator's two main goals are:

  • It doesn't aim to be the most realistic simulator, nor the one with the most functionality. Our intention is to make the simulation of a single robot, or of a swarm, as simple as possible, by minimizing the actions the researcher has to perform to start their experiment. In addition, STDR can function with or without a graphical environment, which allows experiments to take place even over ssh connections.

  • STDR Simulator is created in a way that makes it fully ROS compliant. Every robot and sensor emits a ROS transformation (tf) and all measurements are published on ROS topics. In that way, STDR takes advantage of everything ROS has to offer, aiming at easy usage with the world's most state-of-the-art robotic framework. The ROS compliance also means that the graphical user interface and the STDR server can be executed on different machines, and that STDR can work together with RViz!


We hope that STDR Simulator will be useful to the beginner robotics researcher aiming to get to grips with the area, as well as to the advanced roboticist who wants to try out ideas in mapping, navigation or path planning.


STDR Simulator is - of course - open source and can be downloaded from our Github page (https://github.com/stdr-simulator-ros-pkg/stdr_simulator) or through the official ROS binary distributables (ros-hydro-stdr-simulator). Since this is the initial release we expect some things not to work fully, so it would be perfect if you could provide us with any comments, suggestions or bugs that you discover. Our bug tracker is:


https://github.com/stdr-simulator-ros-pkg/stdr_simulator/issues.


Finally, you can find a detailed description of STDR Simulator on our wiki page or our website:



The development team:

  • Manos Tsardoulias (administrator/developer), PhD in Electrical Engineering (etsardou [at] gmail [dot] com)

  • Chris Zalidis (maintainer/developer), Student in Electrical Engineering (Aristotle University of Thessaloniki) (zalidis [at] gmail [dot] com)

  • Aris Thallas (developer), Student in Electrical Engineering (Aristotle University of Thessaloniki) (aris.thallas [at] gmail [dot] com)

RosJava & Android on Hydro

| No Comments | No TrackBacks
From Daniel Stonier via ros-users@

Hi all,

A quick javaland update. I know quite a few people have already dived in early with rosjava on hydro even though it hasn't actually bleeped on the ros news radar yet. 

This is finally that bleep to say that we're reasonably happy (and using it ourselves fairly actively) with the current state of the rosjava/android build environment for hydro and we'll endeavour to keep it stable 'as is' (apart from bugfixes) for the remainder of the hydro release.

So what is in the box?

RosJava

  • Partially Catkinized - each gradle super project is a catkin package
    • You can now do entire workspace builds and CI with one command
  • Ros Gradle Plugins: take a lot of the repetition out of the build.gradle files
  • Debs - you no longer need to build every stack to build your own sources
  • A Maven Repo - you don't even need ros to access/build with the rosjava jars, just point to our maven repo on github.
  • Messages - each package now compiles into its own jar (no superblob)
Android

  • Android Studio/Gradle - uses the new adt build environment from google
    • IDE/Command Line/CI are now all compatible
  • AAR's : takes advantage of the new .aar's for android libraries
  • Partially Catkinized : can do entire workspace builds on these too.
    • with .aar's we can really scale up now
  • A Maven Repo : just point to this instead of having to build everything
    • don't need to build any sources to build your single application anymore!
Places to look for documentation are at:

And join us on the rosjava sig google group for feedback/questions/news!

Cheers,
Daniel

PS A big thank you to Damon Kohler for assisting us in getting rosjava in better shape for hydro and also to the users who endured a lot of rapid changes and gave great feedback early in the upgrade.

PPS What's coming for igloo? Expect a true rosjava message generator...somewhat awkwardly compiling rosjava messages is very quickly reaching an annoying threshold of unbearably biblical proportions!

Announcing rosR and rosR_demos for groovy and hydro

| No Comments | No TrackBacks
From André Dietrich of Otto-von-Guericke-Universität Magdeburg
Fakultät für Informatik on ros-users@

The aim of this contribution is to connect two previously separated worlds: robotic application development with the Robot Operating System (ROS) and statistical programming with R. This fruitful combination becomes apparent especially in the analysis and visualization of sensory data. We therefore introduce a new language extension for ROS that allows nodes to be implemented in pure R. All relevant aspects are described in a step-by-step development of a common sensor data transformation node. This includes the reception of raw sensory data via the ROS network, message interpretation, bag-file analysis, transformation and visualization, as well as the transmission of newly generated messages back into the ROS network.

See also: http://journal.r-project.org/archive/2013-2/dietrich-zug-kaiser.pdf
rosR: http://wiki.ros.org/rosR
rosR_demos: http://wiki.ros.org/rosR_demos
or: http://eos.cs.ovgu.de/dietrich/

From Isaac Saito via ros-users@

We're happy to announce densowave, a ROS/MoveIt! interface for
industrial manipulators from Denso Wave Inc.

http://wiki.ros.org/densowave

Key factors:

- Currently works with VS-060, vertical multi-joint robot from Denso Wave.
- ROS communicates with the embedded controller computer, which has
industry-proven reliability, using a UDP-based standardized protocol
(ORiN). There is also a mechanism to detect faulty commands, so as a
whole the system maintains the same level of safety as their
commercial product setup.
- However, the ROS interface is still experimental and feedback is highly
appreciated. Please try out manipulation in RViz without the real
robot.
- Work done by U-Tokyo. Maintenance by Tokyo Opensource Robotics
Kyokai Association

Lastly, credit goes to Denso Wave, who provided the robot's model to
the open source community.

Kei Okada, Ryohei Ueda

New Repository: kth-ros-pkg

| No Comments | No TrackBacks
Francisco Viña from KTH Sweden announced on ros-users@

The Royal Institute of Technology (KTH, Sweden) is proud to announce the release of kth-ros-pkg. Some of our packages include:

  • kdl_acc_solver: KDL solver for calculating cartesian accelerations from joint positions, velocities and accelerations.
  • kdl_wrapper: C++ wrapper for easily getting KDL kinematic chains and using KDL kinematic solvers with robots defined in ROS through URDF in the parameter server.
Future packages will include :
  • door_opening_control: adaptive controllers for simultaneous control and estimation of kinematic parameters of sliding and revolute doors.

As well as adaptive control/kinematics estimation for tool calibration, joint human-robot manipulation of objects, etc.

Our github repo:
  https://github.com/kth-ros-pkg

New packages: REEM-C simulation packages

| No Comments | No TrackBacks
From Paul Mathieu on ros-users@

Following the release of our latest biped robot, REEM-C, PAL Robotics is proud to present a set of simulation packages designed to offer a feature-rich, free solution to use REEM-C in a Gazebo simulation. Owners of a real robot will also have access to a bipedal walking controller as well as a tuned ROS navigation stack, which allows for autonomous biped navigation in simulation and on a real REEM-C.

Have a look at the ROS wiki page about REEM-C: 


and the REEM-C simulation tutorials:

http://wiki.ros.org/Robots/REEM-C/Tutorials
Do not hesitate to contact us at info@pal-robotics.com or to check out our website http://www.pal-robotics.com to learn more about our products, or at business@pal-robotics.com for commercial conditions and availability.

New Package: screengrab_ros

| No Comments | No TrackBacks
From Lucas Walter on ros-users@

This is a ROS node for defining a region of interest on the screen with x, y, width, and height and publishing it just like a camera feed.  It could be most useful for capturing from a camera or any application that displays in Linux but has no ROS support, or for recording from the screen to capture all the user's mouse movements and window placement for later export into a regular video.  

It is somewhat redundant with software that can capture from the screen into virtual webcam devices that are then trivial to publish with ros, though they would lack the ability to be controlled through ros parameters.  https://code.google.com/p/webcamstudio worked well for virtual webcams but I haven't tried it in a couple of years.  

It probably only works correctly with an X Windows setup similar to what I have on Ubuntu 12.04; there is no attempt to convert the image out of XGetImage for special cases.


If the screen spans multiple monitors and the ROI crosses the screen boundary so that it is partially inside and outside the display area, it might crash; I haven't tried that.  I'll add publishing of the max width and height next.

It's written for catkin and Hydro. 

Any feedback and inclusion into the package list would be welcome-

New Package: argos3d_p100 package for groovy and hydro

| No Comments | No TrackBacks
From Simon Vogl via @ros-users

We are happy to announce the argos3d-p100 ros package:

https://github.com/voxel-dot-at/argos3d_p100_ros_pkg

This package provides support for the Bluetechnix Argos3D P100 ToF camera
based on the PMDSDK library that is provided with the camera, along with
a detailed installation how-to and a visualization example based on rviz.


Any comments/suggestions are welcome.

Node for using MFD and colored LEDs for Saitek X52 Pro

| No Comments | No TrackBacks
From Christian Holl via @ros-users

Hi all,

If anybody wants to use ROS with the Saitek X52 Pro and its multifunction display, here is my code for doing that:

https://github.com/cyborg-x1/x52_joyext

The node is based upon the x52_pro_lib (credits to the programmer) which supports accessing every extended functionality of the joystick.

With the node you are able to set the text, the time field, the color of each button LED which supports color change, and the backlight brightness, as well as print text at any position of the display.

Inside the package there is also a node which can use any standard basic message type (bool, int, double or joy axis) as input to set the color of a specific button. Everything for this node is configured inside the launch file. There is a special syntax for defining which color is displayed at a specific value. An example of it, using the joy topic, can be found inside the launch directory.
The example uses the wheel around button E to change the color of button A and B.

One of the buttons is green when the wheel is centered, while the other one is red. If the wheel is maxed in any direction, it's the other way round. In between, both buttons are yellow.


What's missing, but should be there:

-A node like the one for the colored buttons, but for printing the value as text on a specified position of the MFD.

-A time node which gets the system time and updates the time value on the MFD.

-awesome detailed documentation ( uh, sorry ;-) )


What would be really cool: 
A generic MFD Menu controlled by the selector wheels near the display, should be possible, if you are funny ;-)



Have fun!

PR2 Surrogate

| No Comments | No TrackBacks

Crossposted from osrfoundation.org

Recently, David Gossow at Willow Garage integrated the Oculus Rift virtual reality headset into RViz and, based on that, created a package for the PR2 robot called PR2 Surrogate. It lets you teleoperate a PR2 using the Oculus Rift and the Razer Hydra game controllers. We've been working closely with him to make this publicly available and are happy to announce its release into ROS Groovy Galapagos and Hydro Medusa.

The Oculus Rift is a virtual reality headset that gives you a fully immersive 3D experience by combining an extremely wide field of view and low latency head tracking. It is scheduled to be commercially available in 2014, but a developer kit can already be obtained. The Razer Hydra game controllers consist of two paddles you hold in your hands that precisely track their position and orientation in space. In addition, the controllers have the standard joysticks and buttons you find on a gamepad.

Binary packages for Ubuntu 13.04 armhf

| No Comments | No TrackBacks
From Austin Hendrix on ros-users@

I've been doing builds of ROS for ARM now, and I'm pleased to announce a more complete build of ROS Hydro for Ubuntu 13.04 armhf.

This includes builds of the core ROS tools, along with OpenCV, PCL and Navigation.

Install instructions are here: http://www.ros.org/wiki/hydro/Installation/UbuntuARM ; I've tested and confirmed that they work on my BeagleBone Black.

Enjoy!
-Austin

ROS Node for JACO

| No Comments | No TrackBacks

Clearpath Robotics and Kinova Robotics have just released the first ever ROS package for the JACO Robot Arm, with assistance from Worcester Polytechnic Institute's NASA Sample Return team. The package exposes all of the functionality of the arm to ROS, so feedback from the arm is available to be published to topics inside of ROS.

Up until now, JACO Robot Arm has mainly been used as an assistive device, rather than a manipulator for research and development initiatives. However, with Clearpath's new partnership with Kinova, the JACO Robot Arm is finding new territory in research applications including aerospace and mining.

Previously, the arm could only be controlled manually or through a separate computer running Windows. Now the ROS driver, which is designed exclusively for the JACO Robot Arm, integrates the hardware and software into a single system, creating an easy-to-use and time-efficient process. For those who purchase the arm from Clearpath Robotics, it will come fully-loaded with a launch file (included in the driver), which will initialize communications with the arm and prepare it to accept commands.

The JACO Robot Arm is unique for ROS users because it is well priced and it's delivered as a complete, all-in-one package (so, no more messing around with separate hardware and software systems - customers get both, right out of the box!). Not to mention, it is one of the best looking manipulators on the market.

JACO Robot Arm is a commercial-quality, accessible robot arm that is now available to ROS users. To download the first ROS interface that works with JACO Robot Arm, go to: http://www.ros.org/wiki/jaco

New BRIDE release 0.2.0

| No Comments | No TrackBacks
From Alex Bubeck of Fraunhofer IPA via ros-users@

Hi ros-users and bride-users,

I would like to announce the new release of BRIDE for ROS.

In addition to multiple small fixes these are the new features of the 0.2.0 release:

* Graphical creation of System models: Components can now be added graphically to the system model. No xml hacking any more!

* Coordinator development: You can now develop state machines in BRIDE, so-called Coordinator Components. They make use of the Capability Components in your system by triggering action servers or service clients. The Coordinator models are code-generated into SMACH components and appear as regular components in the system diagram.

* Action support in code generation: ActionServers are now auto-generated. Only the execution_callback has to be implemented in the user code in the corresponding user_code section.

* Standalone compiler: In the bride_compiler package there is a standalone compiler to use the code generation without Eclipse. Code generation can also be triggered by running "make regen" in the terminal for updating after changes in the model.

As the templates are in the separate bride_templates package, it is now easier to recommend changes in the templates and improve them in smaller iterations.

As usual the installation instructions are on the http://www.ros.org/wiki/bride/ wiki page and the updated tutorials are at http://www.ros.org/wiki/bride/Tutorials/. The binary releases are currently in the build pipeline and should be available soon.

Feel free to give feedback directly, by mailing-list or post bugs and feature requests at https://github.com/ipa320/bride/issues.
From John Schulman at UC Berkeley via ros-users@

Hi all,

I'm announcing the release of trajopt, a library for trajectory optimization. More specifically, trajopt is designed for planning collision-free paths for robot arms and mobile manipulators.

Trajopt is built on top of OpenRAVE. You can define your optimization problem in JSON format, from Python or C++, and then call the optimizer, as shown in the sketch below.
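
To give a flavor of what such a problem description might look like, below is a small sketch that builds a JSON request from Python. The field names and values are assumptions loosely modeled on trajopt's published examples, so treat them as illustrative and check the documentation for the exact schema.

# Sketch of a trajopt-style problem request built as a Python dict.
# Field names are assumptions based on trajopt's examples; consult the docs.
import json

joint_target = [0.1, -0.5, 0.0, -1.2, 0.0, 0.4, 0.0]  # placeholder 7-DOF goal

request = {
    "basic_info": {"n_steps": 10, "manip": "rightarm", "start_fixed": True},
    "costs": [
        {"type": "joint_vel", "params": {"coeffs": [1.0]}},    # smoothness cost
        {"type": "collision", "params": {"coeffs": [20.0], "dist_pen": [0.025]}},
    ],
    "constraints": [
        {"type": "joint", "params": {"vals": joint_target}},   # end at the goal configuration
    ],
    "init_info": {"type": "straight_line", "endpoint": joint_target},
}

print(json.dumps(request, indent=2))  # this JSON string is what gets handed to the optimizer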

Some highlights of trajopt:
- It's fast. It solves arm planning problems in simple environments in about 150ms (converging to a locally optimal solution)
- It reliably finds collision-free paths, especially with multiple initializations. FWIW it solves 100% (204/204) of problems in our benchmark collection
- It performs well on very high-dof problems, e.g. jointly optimizing over the arms and base of a mobile manipulator, or optimizing over all of the joints of a humanoid robot.
- A wide variety of different costs and constraints are implemented (pose constraints, velocity constraints, static stability, and more). You can write your own cost and constraint functions in Python or C++.

The technical details are described in a paper, which is linked to on the front page of the documentation.

This code is at an early stage of development. I'd be grateful to hear about any problems, questions, or comments.

New Package: arl_ardrone_examples

| No Comments | No TrackBacks
From Parker Conroy via ROS Users

Hi ros-users,

I'd like to add this package (https://github.com/parcon/arl_ardrone_examples) to the known software list. 

 This basic release is designed to help roboticists and hobbyists new to the AR drone quickly command the robot. A variety of simple nodes are included to show users how to take off, land, reset, and fly the AR drone. Nodes to help users work with the cameras will be included soon in an update.

~Thank you
Parker Conroy
From Tingfan via ROS Users

Hi all,

Here's a prototype of matlab_bridge built on top of rosjava.
Thanks to automatic code generation in rosjava and native Java support
in MATLAB, I don't have to deal with the dynamic linking problems that
the typical mex-function approach would encounter.
The result is a cross-platform ros_matlab_bridge.

http://code.google.com/p/mplab-ros-pkg/wiki/java_matlab_bridge

The current implementation depends on an old version of rosjava (Jan 2012).
I was wondering if it is worth the effort to rewrite the code to catch up
with the new rosjava APIs.
Comments welcome.

Thank you very much.
-Tingfan
From Patrick Goebel via ROS Users

Hello ROS users,

I have released a new package for the Element microcontroller made by cmRobot.  Details and source code can be found at:

Documentation: http://www.ros.org/wiki/element
Source: svn http://pi-robot-ros-pkg.googlecode.com/svn/trunk/element

(The source link should appear on the Wiki page on the next indexing.)

Features of the package include:
  • Support for a wide variety of commonly used sensors including sonar (Ping, MaxEZ1), infrared (Sharp GP2D12), temperature, current, and voltage (Phidgets), speech (Devantech SP03), compass (Devantech CMPS03), as well as generic digital and analog sensors.
  • Onboard PID controller and dual H-Bridges for driving a differential drive robot
  • One bipolar stepper motor
  • Support for controlling up to six hobby servos
  • Sensors are polled using a multi-threaded queue and can each run at their own rate
  • Connects to a PC or SBC using USB, XBee or TTL

--patrick

Updated repository: universal_robot

| No Comments | No TrackBacks
From Shaun Edwards on ros-users@


All,

 

With the permission of the original developers, ROS-Industrial has officially taken ownership of the universal robot stack/metapackage.  The new repo can be found here: https://github.com/ros-industrial/universal_robot

 

We will be doing a groovy release from the existing driver (basically the same as fuerte).  I plan to merge changes we have made at SwRI into the master/trunk.  Some of these are improvements to the driver itself, as well as some arm navigation work. 

 

As with our other packages, the trunk/master will be unstable development for groovy and the branch(released) version will be stable.

 

As always we are interested in submissions and bug fixes from the community.  If anybody is interested in helping develop this stack further, please let me know.

 

Thank you,

 

Shaun

New Package: ROS Arduino Bridge

| No Comments | No TrackBacks
From Patrick Goebel via ROS Users

Hello ROS Fans,

I would like to announce a new stack for controlling an Arduino-based robot with ROS.  The official documentation can be found at:

http://www.ros.org/wiki/ros_arduino_bridge

The stack includes a base controller for a differential drive robot as well as support for reading sensors and controlling PWM servos.  The code does *not* depend on rosserial.

The packages have been tested against the ROS navigation stack (Electric) using a Pololu motor controller and Robogaia encoder shield.

This stack comes out of a discussion amongst members of the Home Brew Robotics Club (HBRC) for extending ROS support to hobby-level robots using inexpensive and easily obtained hardware.  The code was inspired by Michael Ferguson's ArbotiX drivers and borrows heavily from it.  (Thanks Fergs!)

--patrick
From Jos Elfring of Eindhoven University of Technology on ros-users@

Dear all,


Eindhoven University of Technology is proud to announce the release of the following stacks:

  - http://ros.org/wiki/amigo_simulator  - Components needed to simulate our AMIGO-robot in Gazebo
  - http://ros.org/wiki/tulip_simulator  - Components needed to simulate our TUlip-robot in Gazebo
  - http://ros.org/wiki/wire  - Toolkit for constructing a probabilistic world model that keeps track of object identities and properties over time

The wiki pages contain extensive documentation and numerous tutorials which will get you up and running in no time.

Furthermore, information about our robots AMIGO and TUlip - both of which participate in the RoboCup Tournaments - can be found here:

  - http://www.ros.org/wiki/Robots/AMIGO
  - http://www.ros.org/wiki/Robots/TUlip

Of course, we are more than happy to receive any feedback and answer any questions regarding the use of the above-mentioned stacks!

Cheers!

Package Release: BRIDE release 0.1.2

| No Comments | No TrackBacks
From Alexander Bubeck of Fraunhofer IPA via ROS Users

Dear ROS-community,

 

I want to announce the 0.1.2 version of BRIDE, which is now in a state where it can be used by ROS developers, and I'm looking forward to feedback.

 

BRIDE is a model driven engineering tool chain based on Eclipse. It is developed as part of the BRICS project.

 

In manually created ROS components, ROS-specific code parts are usually mixed with the framework-independent algorithmic core of a component. In contrast, BRIDE allows for a clear separation of framework-independent and framework-specific code: Component interfaces and behaviors are modeled in an abstract representation. This representation can then be used to auto-generate source code for different middleware and programming language targets.

 

You can find more information on the installation as well as tutorials on the corresponding roswiki pages at http://ros.org/wiki/bride .

 

If you are interested in the BRICS concepts, the BRICS project or want to try out the OROCOS targeting of BRIDE please visit http://www.best-of-robotics.org .

 

Best regards,

 

Alexander Bubeck

New Package: ccny_rgbd: Minecraft edition

| No Comments | No TrackBacks
From Ivan Dryanovski of CCNY on ros-users@

Hi everyone,

We recently added another tool to our collection: 2schematic. It lets
users convert colored PointCloud (.pcd) or Octomap (.ot) files to
Minecraft schematic files. In conjunction with ccny_rgbd, or any other
3D mapping application, this enables you to import indoor environments
into Minecraft (and destroy them).

Here's a video:




The code is on github:

https://github.com/idryanov/2schematic

It supports several different coloring models and a simple color
filter. The output is .schematic files, which you can then view or
edit inside MCEdit and export to Minecraft worlds.

I hope you enjoy our contribution!

Cheers,

Ivan

PS. I saw yesterday that Jon Stephan independently released a very
similar package. Great work - I look forward to checking it out! The
3D mapping research appears to be converging to its inevitable
conclusion...

New Package: minecraft-ros

| No Comments | No TrackBacks
From Jon Stephan on ros-users@

I am pleased to announce a new package: minecraft-ros.  It consists of 2 utilities to convert ROS maps and octomaps to Minecraft worlds.

The files and instructions can be found here: https://code.google.com/p/minecraft-ros/

The map_2d_2_minecraft.py script will convert a 2D .pgm map file into a minecraft world. The world file is copied into the ~/.minecraft/saves folder.



The octomap_2_minecraft script, along with the octomap_dump program will convert an octomap into a Minecraft map.



Both these scripts rely on pymclevel.  A chunk of the octomap conversion code is borrowed from Nathan Viniconis' Kinect conversion code.

Enjoy!

New Package: libfreenect based Kinect driver

| No Comments | No TrackBacks
From Piyush via ROS Users

Hey folks,

After some initial discussion on the ROS mailing list [1], a
libfreenect (OpenKinect) based Kinect driver for ROS has been released
for Fuerte (freenect_stack) [2]. A system install for the stack is now
available. The stack is designed to have the same API as the OpenNI
one, and there is an easy migration guide [3]

The stack has the following known limitations:
1) It only supports the Kinect [4]
2) It does not support USB 3.0 [5]. In contrast, OpenNI with a bit of
work can be made to work with USB 3.0 [6][7].

I'll continue to maintain the stack. My first priority will be to
include USB 3.0 compatibility, which is something I will work on as
time permits. Almost all high-end laptops these days only have USB 3.0
ports.

If you are facing problems with the stack, please report them on the
corresponding bug report page [8].

[1] http://comments.gmane.org/gmane.science.robotics.ros.user/16856
[2] http://www.ros.org/wiki/freenect_stack
[3] http://www.ros.org/wiki/freenect_camera?distro=fuerte#Migration_guide
[4] http://www.ros.org/wiki/freenect_camera?distro=fuerte#Other_OpenNI_devices
[5] https://github.com/piyushk/freenect_stack/issues/5
[6] http://answers.ros.org/question/9179/kinect-and-usb-30/
[7] http://answers.ros.org/question/33622/openni_launch-not-working-in-fuerte-ubuntu-precise-1204/
[8] https://github.com/piyushk/freenect_stack/issues

Thanks,
Piyush

New Package: RCommander

| No Comments | No TrackBacks
From Hai Nguyen of the Healthcare Robotics Lab @ Georgia Tech on ROS Users

Hello ROS community,

I would like to announce the result of our work here at Georgia Tech
in collaboration with Willow.  This is the first release of RCommander
(version 0.5), a visual framework for easy construction of SMACH state
machines allowing users to interactively construct, tweak, execute,
load and save state machines.  There are two stacks.  The
rcommander_pr2 stack contains an implementation with basic states for
controlling the PR2 robot.  rcommander_core contains the framework's
essentials allowing the construction of custom RCommander interfaces
for robots other than the PR2.  The wiki doc links below also has a
few tutorials for getting started with either rcommander_pr2 or
rcommander_core.

Wiki-docs: http://www.ros.org/wiki/rcommander_core
Repository: https://code.google.com/p/gt-ros-pkg.rcommander-core/
Ros-install
 for indexer:
http://gt-ros-pkg.googlecode.com/git/rcommander/rcommander_core.rosinstall

Wiki-docs: http://www.ros.org/wiki/rcommander_pr2
Repository: https://code.google.com/p/gt-ros-pkg.rcommander-pr2/
Ros-install
 for indexer:
http://gt-ros-pkg.googlecode.com/git/rcommander/rcommander_pr2.rosinstall

Just some notes: I've tested this on ROS Electric and have not done
much with Fuerte yet, but it will be supported soon.  The wiki docs
point to an older Mercurial repository; they should point to the newer
git repository when the ROS indexer gets updated.

Announcing MORSE 1.0

| No Comments | No TrackBacks
Dear ROS community,

After 4 years of worldwide development by over 20 people in 10 different labs, we are extremely excited to announce the immediate availability of MORSE-1.0, a novel versatile simulator for academic robotics, with full ROS support.

Amongst the prominent features:

  * Versatile 3D simulator for mobile robot simulation (single or multiple robots),

  * Realistic ('modern' OpenGL) and dynamic environments (interaction with other agents like humans or objects),

  * Based on well known and widely adopted open source projects (Blender for real-time 3D rendering, Bullet for physics simulation, dedicated robotic middlewares for communications),

  * Command-line oriented (with optional scene editing in Blender), entirely scriptable in Python,

  * Adaptable to various levels of simulation abstraction (e.g. simulate cameras as video-streams, depth-streams or semantic maps depending on your needs),

  * > 20 classes of sensors (including depth sensors, cameras, IMU, laser scanners...) and > 15 classes of actuators (including kinematic chains, quadrotor control, force control...) are available. Detailed documentation explains how to add new ones (in C or Python),

  * Currently supports ROS, YARP, MOOS and Pocolibs + direct socket interface

  * Extensive documentation, available here:
        http://www.openrobots.org/morse/doc/stable/morse.html

And as a collaborative academic project, the source code is available under a permissive BSD license. Grab your copy from http://www.github.com/laas/morse !
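
To give a flavor of the Python scripting, here is a small builder-script sketch in the spirit of the MORSE tutorials; the component and environment names used below are assumptions to be checked against the documentation.

# Small MORSE builder-script sketch. Component/environment names are
# assumptions based on the MORSE tutorials; check the documentation.
from morse.builder import *

robot = ATRV()                  # a simple outdoor robot model

pose = Pose()                   # ground-truth pose sensor
pose.translate(z=0.75)
robot.append(pose)
pose.add_stream('ros')          # expose the sensor as a ROS topic

motion = MotionVW()             # linear/angular velocity actuator
robot.append(motion)
motion.add_stream('ros')        # driven e.g. by a geometry_msgs/Twist topic

env = Environment('indoors-1/indoor-1')

Such a script is then launched with the morse command-line tool, which is what the "command-line oriented" point above refers to.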


Last but not least, Michael and Pierrick will be present at ROSCon in May to present the project. Feel free to drop by to meet the team!
From Mani Monajjemi on ROS Users

Hi Everyone,

`ardrone_autonomy` is a new ROS driver based on the newly released Parrot
AR-Drone SDK 2.0, which supports both AR-Drone 1 & 2 quadrocopters.
This driver is a fork (and update) of the `ardrone_brown` driver with lots
of performance improvements and new features.

The code and documentation can be accessed from here:
https://github.com/AutonomyLab/ardrone_autonomy

With Best Regards,
Mani Monajjemi

New Package optris_drivers

| No Comments | No TrackBacks
From Stefan May on ROS Users

Dear ROS users,


a new ROS wrapper for Optris thermal imagers is available at:
http://ros.org/wiki/optris_drivers

It works on 32-bit platforms and is tested on Ubuntu 12.04 for the moment.

Feel free to give me feedback, if you share one of those devices.

Best regards,

   Stefan May

New Package: V-REP

| No Comments | No TrackBacks
From Marc Freese of Coppelia Robotics on ROS Users

Dear ROS community,

We are happy to announce that the V-REP robot simulator, which includes an extensive and powerful ROS interface, is now open source. As of now, it is also fully free and without any limitation for students, teachers, professors, schools and universities. No registration required. Moreover, V-REP is now available for customization and sub-licensing.

V-REP is the Swiss army knife among robot simulators: you won't find a simulator with more features and functions, or a more elaborate API:

- Cross-platform: Windows, Mac OSX and Linux (32 & 64 bit)
- Open source: full source code downloadable and compilable. Precompiled binaries also available for each platform
- 6 programming approaches: embedded scripts, plugins, add-ons, ROS nodes, remote API clients, or custom solutions
- 6 programming languages: C/C++, Python, Java, Lua, Matlab, and Urbi
- API: more than 400 different functions
- ROS: >100 services, >30 publisher types, >25 subscriber types, extendable
- Importers/exporters: URDF, COLLADA, DXF, OBJ, 3DS, STL
- 2 Physics engines: ODE and Bullet
- Kinematic solver: IK and FK for ANY mechanism, can also be embedded on your robot
- Interference detection: calculations between ANY meshes. Very fast
- Minimum distance calculation: calculations between ANY meshes. Very fast
- Path planning: holonomic in 2-6 dimensions and non-holonomic for car-like vehicles
- Vision sensors: includes built-in image processing, fully extendable
- Proximity sensors: very realistic and fast (minimum distance within a detection volume)
- User interfaces: built-in, fully customizable (editor included)
- Robot motion library: fully integrated Reflexxes Motion Library type 4
- Data recording and visualisation: time graphs, X/Y graphs or 3D curves
- Shape edit modes: includes a semi-automatic primitive shape extraction method
- Dynamic particles: simulation of water- or air-jets
- Model browser: includes drag-and-drop functionality, also during simulation
- Other: Multi-level undo/redo, movie recorder, convex decomposition, simulation of paint, exhaustive documentation, etc.

For more information, please visit http://www.coppeliarobotics.com or have a look at following demo video:




Best regards,

Marc

New Package: usb_cam on groovy

| No Comments | No TrackBacks
From Adrian Cooke on ROS Users

Hey all,

I ported usb_cam from the bosch_drivers package to work on Groovy. The port can be found in my roshome repository: https://github.com/agcooke/roshome/tree/master/src/usb_cam

The patch is attached.

Are there other usb cam drivers available on groovy by default?

I also have tried twice to add a question to answers.ros.org and it does not work. I click the 'Ask your question' button and then just get shown the same page...

New Package: matlab_rosbag

| No Comments | No TrackBacks
From Ben Charrow on ROS Users

Hi all.

I wanted to announce the initial release of matlab_rosbag, a small library which lets you read bags in matlab.  This library is intended to replace the one-off python / C++ programs / shell scripts that you have to write anytime you want to analyze or play with data inside of matlab.   A few selling points:

* ROS doesn't need to be installed on a system to use the library, you just need to download a mex function and a matlab class file.

* The library wraps the C++ rosbag API and so it supports bag file format 1.2 and later, including things like reading compressed messages.

* Message instances are converted to matlab structs using the message definitions contained within the bag.

Currently, there are several things that matlab_rosbag doesn't do such as writing to a bag, filtering messages by connection id, and reading all metadata.  None of these would be particularly hard to add, I just haven't had the time yet.

You can get the source code and pre-compiled binary releases for OS X and Linux from github:
https://github.com/bcharrow/matlab_rosbag

Cheers,
Ben

New Package: ccny_rgbd

| No Comments | No TrackBacks
From Ivan Dryanovski on ROS Users

Hello everyone,


We are pleased to release ccny_rgbd, a collection of tools for fast
visual odometry and 3D mapping with RGB-D cameras. Highlights of the
software include:

 * RGB-D image processing pipeline
 * Fast, lightweight visual odometry, operating at 30+ Hz on VGA data
(single thread, no GPU)
 * 3D map server which supports saving/loading and graph-based optimization

The documentation is available at the ROS wiki:

 *  http://www.ros.org/wiki/ccny_rgbd

This video shows an overview of the functionality:

 * 

The code is available for download on github, and currently supports
ROS fuerte and groovy:

 * https://github.com/ccny-ros-pkg/ccny_rgbd_tools

The software was developed in conjunction with our upcoming ICRA2013
publication [1].

Cheers,

Ivan

[1]  Ivan Dryanovski, Roberto G. Valenti, Jizhong Xiao. Fast Visual
Odometry and Mapping from RGB-D Data. 2013 International Conference on
Robotics and Automation (ICRA2013).

A quick update: I added colored octomap export directly from the
keyframe_mapper. Info and some images are available at:

http://www.ros.org/wiki/ccny_rgbd/keyframe_mapper

New Package: differential_drive

| No Comments | No TrackBacks
From Jon Stephan on ROS Users

Hi ROS users,

I'd like to announce the differential_drive package.  This provides some of the low level nodes needed to interface a differential drive robot to the navigation stack.  I think this will be especially useful for beginning hobby roboticists like myself, and it provides the following nodes:
  • diff_tf - Provides the base_link transform.
  • pid_velocity - A basic PID controller with a velocity target.
  • twist_to_motors - Translates a twist into two motor velocity targets (see the sketch below).
  • virtual_joystick - A small GUI to control the robot.
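
To illustrate the kind of computation a node like twist_to_motors performs, here is a minimal rospy sketch of the underlying differential-drive math. The topic names and wheel separation below are placeholders for the example, not necessarily what the package itself uses.

#!/usr/bin/env python
# Sketch of the twist -> wheel-velocity math behind a node like twist_to_motors.
# Topic names and the wheel separation value are placeholders.
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32

WHEEL_SEPARATION = 0.30  # meters, robot-specific

def on_twist(msg):
    v = msg.linear.x                               # forward velocity (m/s)
    w = msg.angular.z                              # yaw rate (rad/s)
    left_pub.publish(v - w * WHEEL_SEPARATION / 2.0)
    right_pub.publish(v + w * WHEEL_SEPARATION / 2.0)

rospy.init_node('twist_to_wheels_example')
left_pub = rospy.Publisher('lwheel_vtarget', Float32)
right_pub = rospy.Publisher('rwheel_vtarget', Float32)
rospy.Subscriber('twist', Twist, on_twist)
rospy.spin()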

The ROS wiki page (with links to tutorials) can be found here: http://www.ros.org/wiki/differential_drive

My hacked K'nex robot is an example using this package: http://code.google.com/p/knex-ros/wiki/ProjectOverview

This is my first released package, I hope someone out there finds it useful.

Thanks,
-Jon

From Piyush at University of Texas

We've just completed a ROSJava based wrapper for the April Fiducial Marker System from the APRIL lab at UMich. The package should be (in principle) compatible with any package that uses the ROS Wrapper around ARToolKit. Hopefully some of you might find it useful.

Stack page: http://www.ros.org/wiki/april

Bug reports: http://code.google.com/p/utexas-ros-pkg/issues/list

Announcement by Filip Muellers to ros-users

Hi all,

I have implemented some nice new features for rxDeveloper 1.3b and need some feedback. I would like to improve these new features.

New features:

  • automatic generation of source code from specifications - the specfile editor allows you to create Python and C++ templates. For C++ files the CMakeLists.txt can be modified automatically, too.
  • rosdep install and rosmake can be executed graphically for a selected package (component creation tab)
  • specfiles can be created by fetching information from running ROS processes (component creation tab)

The rxDeveloper runs on Diamondback, Electric and Fuerte beta.

For more information, sources and tutorials please visit rxdeveloper-ros-pkg/.

Regards,
Filip Muellers

Announcement by Bob Mottram to ros-users

I've written a new stereo camera driver which is intended for use with V4L2 compatible cameras, such as stereo webcams like the Minoru.

https://launchpad.net/stereocamera-v4l2-ros-pkg

This doesn't do any stereo correspondence, but broadcasts images and allows calibration parameters to be stored. Various parameters can be set within the launch file.

Announcement from Arnaud Ramey (University Carlos III of Madrid)

Hello ROS users!

I am happy to announce the release of an image_transport plugin for float images, mainly aimed at broadcasting compressed Kinect depth images. More details follow. I hope this might be useful for other users and would be happy to hear feedback! (That would justify the hours of work it took to get to this result :) ).

For more information, including installation instructions, please see the ros-users post.

Link: Announcing compressed_rounded_image_transport

Announcing ROS Android Sensors Driver


Announcement by Chad Rockey (maintainer of laser_drivers) to ROS users

Hi ROS Community,

I've been working on a driver that connects the sensors in Android devices to the ROS environment. At this time, it only publishes sensor_msgs/NavSatFix messages, but I will soon introduce sensor_msgs/Imu and sensor_msgs/Image to publish data from accelerometers, gyroscopes, magnetometers, and front/rear cameras.
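For anyone who wants to consume the GPS fixes on the desktop side, a minimal rospy listener could look like the sketch below; the topic name "fix" is an assumption, so check the driver's documentation for the actual name.

import rospy
from sensor_msgs.msg import NavSatFix

def on_fix(msg):
    # Latitude/longitude are in degrees, altitude in meters (WGS 84).
    rospy.loginfo("lat=%.6f lon=%.6f alt=%.1f", msg.latitude, msg.longitude, msg.altitude)

if __name__ == "__main__":
    rospy.init_node("navsatfix_listener")
    rospy.Subscriber("fix", NavSatFix, on_fix)
    rospy.spin()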

To get more information and to install, please see the following:

To file bugs, request features, view source, or contribute UI, translation, or other improvements, please see the Google Code project:

http://code.google.com/p/android-sensors-driver/

I hope everyone finds this useful and I look forward to hearing your feedback and seeing cool uses for Android devices in robotics.

Thanks,
- Chad Rockey


New face recognition package


Announcement by Pouyan Ziafati of University of Luxembourg and Utrecht University to ros-users

Dear All,

I am happy to announce a new ROS package (face_recognition) for face recognition in a video stream. The package provides an actionlib interface for performing different face recognition functionalities, such as adding training images directly from the video stream, re-training (updating the database to include new training images), recognizing faces in the video stream, etc. (For more info, see the README file.)

The package is accessible from the git repository:

git://github.com/procrob/procrob_functional.git
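As a rough sketch of how an actionlib interface like this is typically driven from Python, a client might look like the following; the action server name, the FaceRecognitionAction type, and the goal fields shown here are assumptions for illustration, so consult the package's README for the real specification.

import rospy
import actionlib
# The action type and its module path are assumptions, not the package's verified API.
from face_recognition.msg import FaceRecognitionAction, FaceRecognitionGoal

if __name__ == "__main__":
    rospy.init_node("face_recognition_client_sketch")
    client = actionlib.SimpleActionClient("face_recognition", FaceRecognitionAction)
    client.wait_for_server()

    goal = FaceRecognitionGoal()
    goal.order_id = 0          # hypothetical: e.g. 0 = recognize once
    goal.order_argument = ""   # hypothetical: e.g. a person's name when training
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("result: %s", client.get_result())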

Best regards,
Pouyan, PhD Candidate,
Utrecht University & University of Luxembourg

rviz Qt prototype available for testing


Announcement from RViz SIG Coordinator Dave Hershberger to ros-users

Rviz is moving from the wxWidgets library to the Qt library as of the ROS Fuerte release. This transition to Qt will improve RViz compatibility on more platforms and better integrate with future GUI tools. For more information on the motivation for these changes, please refer to the RViz and ROS GUI SIGs:

This is a fairly big change to the code. The internal plugin API has changed and the Python API will be different (when I get it implemented). The GUI appears and behaves mostly the same as the wx version; most of the differences come from differences in style between wx and Qt rather than from intentional changes. For a list of major changes, see visualization_experimental/ChangeList.

An early but fairly complete version of the new code is available in the temporary visualization_experimental stack, which works with ROS Electric. It is available in the ros-electric-visualization-experimental debian package. This version does not have Python support implemented yet, but all the built-in display types, tools, and view controllers work. I encourage rviz users to try the new version (called rviz_qt for now) before Fuerte so I can fix bugs and make the transition as smooth as possible.

Please report bugs and feature requests to the same place as usual for rviz, but with "qt" in the keywords field, like so: report a bug, request a feature, list existing tickets.

Thanks,
Dave

Oier Mees on ros-users writes

Hello,

During a summer internship at the Tekniker Research Center, I developed a telepresence and teleoperation application which I have just published. You can teleoperate the robot (in my case a Segway RMP 200 with a Kinect on top) with the joystick or the gyro of the PS3 controller, in addition to the usual buttons of the UI. A videoconference is also performed with the client PC, and if the user clicks inside the rectangle where the Kinect's video stream is shown, the robot will attempt to reach that destination. For instance, as you can see in the demo video, if the user clicks on a person standing in the middle of a corridor, the robot will calculate the distance thanks to the Kinect's depth sensor and try to reach the destination using the ROS navigation stack.

A coworker is now going to continue developing and maintaining it, so we would be grateful for any kind of feedback or contribution.

You can grab the code at github and see a demo video on YouTube.

Regards,
Oier Mees

A New Framework for 3D User Interfaces in ROS


Crossposted from willowgarage.com

David Gossow, a recent graduate of the University of Koblenz-Landau and new doctoral student at the Technical University of Munich, visited Willow Garage this spring. David created a new general framework allowing ROS developers to create graphical 3D interfaces to their robot applications, and applied it to building new tools for Human-in-the-Loop robotic manipulation.

David's new framework, called Interactive Markers, allows a ROS application to receive input from a human operator through compatible client software. It separates the application from the tool used for visualization and user interaction, much like a web application runs independently of the web browser. Interactive Markers offer a wide variety of display and interaction modes, enabling a broad range of new applications within ROS. David also implemented a reference front end for Interactive Markers in rviz, effectively transforming it from a robot visualization tool into an interaction engine. This new front end is a major new feature in the recent ROS Electric release.
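To give a flavor of the framework, here is a minimal sketch of an Interactive Markers server in Python that places a single clickable box in rviz; the frame id, marker size, and topic namespace are arbitrary choices for illustration.

import rospy
from interactive_markers.interactive_marker_server import InteractiveMarkerServer
from visualization_msgs.msg import InteractiveMarker, InteractiveMarkerControl, Marker

def on_feedback(feedback):
    # Called whenever the user interacts with the marker in rviz.
    rospy.loginfo("marker '%s' event type %d", feedback.marker_name, feedback.event_type)

if __name__ == "__main__":
    rospy.init_node("simple_marker_sketch")
    server = InteractiveMarkerServer("simple_marker")

    int_marker = InteractiveMarker()
    int_marker.header.frame_id = "base_link"   # arbitrary frame for illustration
    int_marker.name = "my_marker"
    int_marker.description = "Click me"

    box = Marker()
    box.type = Marker.CUBE
    box.scale.x = box.scale.y = box.scale.z = 0.2
    box.color.r, box.color.g, box.color.b, box.color.a = 0.2, 0.6, 0.9, 1.0

    control = InteractiveMarkerControl()
    control.interaction_mode = InteractiveMarkerControl.BUTTON
    control.always_visible = True
    control.markers.append(box)
    int_marker.controls.append(control)

    server.insert(int_marker, on_feedback)
    server.applyChanges()
    rospy.spin()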

David used the Interactive Markers framework to develop new tools for Human-in-the-Loop robotic manipulation. Assistance from a human operator allows a robot to perform complex manipulation tasks even in difficult, unstructured environments. The goal in this framework is to minimize the cognitive effort required on the human side. This can be achieved by taking advantage of the sub-tasks that the robot can perform without assistance. For example, if the operator assists in object recognition, subsequent operations such as grasping and placing can be performed autonomously. When needed, the operator can also get involved at lower levels of task execution, such as specifying grasp poses or even directly operating the robot's gripper. A complete set of tools, based on Interactive Markers, allows an operator to perform all these tasks through the rviz interface.

For more details, see David's presentation below, or check out the interactive_markers and pr2_interactive_manipulation packages on ROS.org.

PR2 rubiks solver stack now available


Announcement from Lorenzo Riano of University of Ulster (uuisrc-ros-pkg) to pr2-users:

We have added a new stack with the packages to run the Rubik's cube solver on the PR2. We will add the documentation shortly. In the meantime you can find it at: https://github.com/uu-isrc-robotics/pr2rubikssolver

Please report any bug, issue, or feature request to uu.isrc.robotics@gmail.com

Announcement from Patrick Goebel of Pi Robot to ros-users

I have put together a little ROS package for doing face tracking using the idea described in the first paragraph below. Note that I have not yet added any feature learning as used in the TLD algorithm. Interested parties can check it out on the ROS wiki.

You can also use this package for tracking arbitrary patches of a video by setting the auto_face_tracking parameter to False and selecting the desired region with the mouse. This does not work as well as face tracking since face tracking uses the Haar detector to re-acquire the face if the number of tracked features falls below a prescribed minimum.

Let me know if you find any bugs or can think of improvements.

--patrick

http://www.pirobot.org

Penn Teaches PR2 How to Read




Menglong Zhu at Penn has given PR2 a fantastic new skill: the ability to read. Using the literate_pr2 software he wrote, PR2 can drive around and read aloud the signs that it sees. Whether it's writing on a whiteboard, nameplates on a door, or posters advertising events, the ability to recognize text in the real world is an important skill for robots.

Although performing OCR on text is not a new technology, performing it in the real world is much more difficult: text can be written anywhere, and first you have to find it. Menglong's code is able to detect areas of text in a camera image and perform text recognition on them separately.

This is another great contribution to ROS from the GRASP Lab. For more information, please see the literate_pr2 page on the ROS wiki.

g2o is now available as a package under the vslam stack. g2o is an open-source C++ framework for optimizing graph-based nonlinear error functions.

A wide range of problems in robotics as well as in computer vision involve the minimization of a nonlinear error function that can be represented as a graph. Typical instances are simultaneous localization and mapping (SLAM) or bundle adjustment (BA). The overall goal in these problems is to find the configuration of parameters or state variables that maximally explain a set of measurements affected by Gaussian noise. g2o is an open-source C++ framework for such nonlinear least squares problems. It has been designed to be easily extensible to a wide range of problems, and a new problem can typically be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA, and g2o offers performance comparable to implementations of state-of-the-art approaches for the specific problems.
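Concretely, the graph encodes a nonlinear least-squares objective of the standard form below (this is the textbook formulation, not a quote from the g2o documentation): the nodes carry the state variables x_i, and each edge (i, j) contributes an error term weighted by an information matrix:

F(\mathbf{x}) = \sum_{(i,j)} \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)^{\top} \, \Omega_{ij} \, \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j),
\qquad
\mathbf{x}^{*} = \arg\min_{\mathbf{x}} F(\mathbf{x})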

Please see the vslam page for installation instructions.

Links:


Yesterday at Google I/O, developers at Google and Willow Garage announced a new rosjava library that is the first pure-Java implementation of ROS. This new library was developed at Google with the goal of enabling advanced Android apps for robotics.

The library, tools, and hardware that come with Android devices are well-suited for robotics. Smartphones and tablets are sophisticated computation devices with useful sensors and great user-interaction capabilities. Android devices can also be extended with additional sensors and actuators thanks to the Open Accessory and Android @ Home APIs that were announced at Google I/O.

The new rosjava is currently in alpha release mode and is still under active development, so there will be changes to the API moving forward. For early adopters, there are Android tutorials to help you send and receive sensor data to a robot.

This announcement was part of a broader talk on Cloud Robotics, which was given by Ryan Hickman and Damon Kohler of Google, as well as Ken Conley and Brian Gerkey of Willow Garage. This talk discusses the many possibilities of harnessing the cloud for robotics applications, from providing capabilities like object recognition and voice services, to reducing the cost of robotics hardware, to enabling the development of user interfaces in the cloud that connect to robots remotely. With the new rosjava library, ROS developers can now take advantage of the Android platform to connect more easily to cloud services.

WU-ros-pkg updates



Announcement from David Lu of Washington University in St. Louis to ros-users

Hey all,

Today I am pleased to announce the release of 2 new stacks and 1 new package as part of the Washington University repository.

First, the Polonius stack. Polonius is a robot control interface designed for running Wizard of Oz style experiments. This work was presented in poster form at HRI2011, and was also used to control a short robot theatre piece we put on last September, the video for which is being presented at the Robots and Art workshop at ICRA next week.

Second, the motion_capture stack. This contains our initial work at incorporating motion capture data into ROS, initially just using the c3d data format. Tools for analyzing this sort of data will be released in the future.

Third, a ROS wiki documentation tool, roswiki_node. This package contains code to make documentation a bit easier by automatically generating the CS/NodeAPI code used on the wiki from your source code.

Feedback is always welcome, either by emailing me or through our SourceForge project page.

Cheers, and Happy May.

-David Lu!!

New Release of RGBDSlam


Announcement from Felix Endres from the University of Freiburg to ros-users

Dear ROS Users,

We are happy to announce a new release of our entry to the ROS-3D contest.

There have been many changes that we would like to share with the community:

  • Improvements w.r.t. accuracy and robustness of registration
  • Performance improvements w.r.t. computation time
  • A more convenient user interface with internal 3D visualization
  • Many convenience features, e.g., saving to PCD/PLY files, node deletion, etc.

It is available for download.

Quick Installation (see README in svn repo for more detail) for Ubuntu:

$ sudo apt-get install ros-diamondback-desktop-full ros-diamondback-perception-pcl-addons ros-diamondback-openni-kinect meshlab
$ svn co http://alufr-ros-pkg.googlecode.com/svn/trunk/freiburg_tools/hogman_minimal
$ svn co https://svn.openslam.org/data/svn/rgbdslam/trunk rgbdslam
$ roscd rgbdslam && rosmake --rosdep-install rgbdslam

Best regards,
Felix


Here is some good news for all of the robot arms out there.

Announcement from Rosen Diankov of OpenRAVE to ros-users

Dear ROS Users,

OpenRAVE's ikfast feature has been seeing a lot of attention from
ros-users recently, so we hope that making this announcement on
ros-users will help get a picture of what's going on with ROS and
OpenRAVE:

We're pleased to announce the first public testing server of
analytical inverse kinematics files produced by ikfast:

http://www.openrave.org/testing/job/openrave/

The tests are run nightly and tagged with the current openrave and
ikfast versions. These results are then updated on the openrave
webpage here:

http://openrave.programmingvision.com/en/main/robots.html

The testing procedures are very thorough and are explained in detail here:

http://openrave.programmingvision.com/en/main/ikfast/index.html

Navigating the "robots" openrave page, you'll be able to see
statistics for all possible permutations of IK and download the
produced C++ files. For example the PR2 page has 70 different ik
solvers:

http://openrave.programmingvision.com/en/main/ikfast/pr2-beta-static.html#robot-pr2-beta-static

For each result, the "C++ Code" link gives the code and the "View"
link goes directly to the testing server page where the full testing
results are shown. If the IK failed to generate or gave a wrong
solution, the results will show a stack trace and the inputs that gave
the wrong solution.

For example, when generating 6D IK for the PR2 leftarm and setting the
l_shoulder_lift_joint as a free parameter, 0.1% of the time a wrong
solution will be given:

http://www.openrave.org/testing/job/openrave/lastSuccessfulBuild/testReport/%28root%29/pr2-beta-static__leftarm/_Transform6D_free__l_shoulder_lift_joint_16___/?

By clicking on the "wrong solution rate" link, a history of the value
will be shown that is tagged with the openrave revision:

http://openrave.org/testing/job/openrave/48/testReport/junit/%28root%29/pr2-beta-static__leftarm/_Transform6D_free__l_shoulder_lift_joint_16___/measurementPlots/wrong%20solutions/history/

In any case all these results are autogenerated from these robot repositories:

http://openrave.programmingvision.com/en/main/robots_overview.html#repositories

They are stored in the international standard COLLADA file format:

http://www.khronos.org/collada/

and OpenRAVE offers several robot-specific extensions to make the
robots "planning-ready":

http://openrave.programmingvision.com/wiki/index.php/Format:COLLADA

The collada_urdf package in the robot_model trunk should convert the
URDF files into COLLADA files that use these extensions.

Our hope is that eventually the database will contain all of the
world's robot arms. So, if you have any robot models that you want
included on this page please send an email to the openrave-users list!

In order to keep the openrave code and documentation more tightly
synchronized, we have unified all the OpenRAVE documentation resources
into one auto-generated  (from sources) homepage:

http://www.openrave.org

Some of the cool things to note are the examples/databases/interfaces gallery:

http://openrave.programmingvision.com/en/main/examples.html

http://openrave.programmingvision.com/en/main/databases.html

http://openrave.programmingvision.com/en/main/plugin_interfaces.html

Finally, just to assure everyone: we're working on tighter integration with ROS and have begun to offer some of the standard planning/manipulation services through the orrosplanning package.

rosen diankov,


One of the new features you may have noticed on the ROS website is the Robots portal pages, which are designed to help you get a new ROS-enabled robot up and running quickly.

But what do you do if you are trying to build your own robot?

Just head over to the new Sensors page, where there is a list of sensors supported by official ROS packages and many other sensors supported by the ROS community. The Sensors portal pages have detailed tutorials and information about different types of sensors, organized by category. Hopefully, the sensor portal pages can also become a resource for developers and inspire interoperability between similar sensors.

If your robots or sensors are not on the list, you can help improve the portals by adding your documented packages and tutorials.

Working together, we can manage exponential growth!

ROS, meet Arduino


Guest post from James Bowman of Willow Garage

We recently wanted to hook up an analog gyro (the Analog Devices ADXRS614) to ROS, and decided to use an Arduino to handle the conversion.

Arduino interfaces a gyro to ROS

The Arduino runs a tiny loop that reads the analog values from its six analog lines, and writes these values to the USB serial connection. Meanwhile, on the robot, a small ROS Python node listens to the USB reports and publishes them as a ROS topic.

That's all there is to it: the whole thing takes under 40 lines of code.
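A minimal version of the ROS side could look like the sketch below; the serial device path, baud rate, topic name, and line format (six space-separated integers per line) are assumptions for illustration, not the exact code described above.

import rospy
import serial
from std_msgs.msg import Int32MultiArray

if __name__ == "__main__":
    rospy.init_node("arduino_analog_bridge_sketch")
    # Device path and baud rate are assumptions; adjust for your setup.
    port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0)
    pub = rospy.Publisher("analog_values", Int32MultiArray)

    while not rospy.is_shutdown():
        line = port.readline().strip()
        if not line:
            continue
        try:
            values = [int(v) for v in line.split()]
        except ValueError:
            continue  # skip malformed lines
        pub.publish(Int32MultiArray(data=values))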

This is probably the simplest possible way to use an Arduino with ROS.  There are quite a few more sophisticated projects:

  • pmad  - controls an Arduino's I/Os using ROS service calls
  • avr_bridge - automates generation of ROS-Arduino message transport
  • arduino - our own package: an Arduino as an Ethernet-connected ROS node

rospy on Android


Announcement from Prof. Dr. Matthias Kranz of TUM

The team of the Distributed Multimodal Information Processing Group of Technische Universität München (TUM) is pleased to announce that we ported rospy to run on Android-based mobile devices.

Python for Android, on top of the Scripting Layer for Android (SL4A), serves as the basis for our rospy port. We extended the scripting layer and added new support for ctypes and other requirements. Now rospy, roslib, and the std_msgs are working and running with a roscore directly on your mobile phone. To configure a roscore on a standard computer to cooperate with the roscore on the Android device, you simply scan a QR code on the computer's screen to autoconfigure the smartphone. Basic support for OpenCV and the image topics is also included. You are welcome to extend the current state of our work.

You will need a current version of the Scripting Layer (v3) and the newer Python for Android with the ability to import custom modules. You can use any Android device able to run SL4A; in general, it should run on any recent Android-powered device. It will not harm your phone at all, and no root access is needed to run ROS on your device.

You can find our code, basic documentation and a video in our repository and on ROS.org.

A small video showcasing how to control a ROS-based cognitive intelligent environment via an Android-based smartphone is available here.

Links:

Neato XV-11 Laser Driver


Announcement from Eric Perko of Case Western to ros-users

Hello folks,

I'm happy to announce a ROS driver for the Neato XV-11 Laser Scanner. ROS has had a driver for the Neato itself for some time now, but it was only useful for going through the Neato's onboard computer. Well, now you can just yank that XV-11 laser scanner out, strap it to your iRobot Create, and feed it right on into the rest of ROS! The XV-11 scanner gives 360 pings at 1-degree increments at a rate of 5 Hz and is useful from ~6 cm to ~5 m.
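Since the driver publishes an ordinary sensor_msgs/LaserScan, a quick sanity check from Python can be as simple as the sketch below; the topic name "scan" is an assumption and may differ in the tutorials.

import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # Count readings inside the sensor's valid range and report the closest one.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    closest = min(valid) if valid else float("nan")
    rospy.loginfo("%d/%d valid ranges, closest %.2f m", len(valid), len(scan.ranges), closest)

if __name__ == "__main__":
    rospy.init_node("xv11_scan_listener")
    rospy.Subscriber("scan", LaserScan, on_scan)
    rospy.spin()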

Chad and I have written up a number of tutorials to help people get started using this low-cost scanner, including how to remove the laser from the vacuum and wire it to USB, how to get it up and running with our ROS driver, and how to interpret the raw bytes if you want to parse the data yourself on a microcontroller.

Let us know if you have any problems/questions/comments.

Eric Perko, Chad Rockey
CWRU Mobile Robotics Lab

Announcement from Patrick Goebel to ros-users

Hello All,

I have released a ROS package for the Serializer microcontroller made by the Robotics Connection. I have been using and developing this package for several months along with at least one other ROS user, so I hope I have most of the egregious bugs fixed. Please let me know if you run into trouble with either the code or the wiki page.

The Serializer has both analog and digital sensor pins, a PWM servo controller, and a PID controller that works well with the ROS navigation stack.

The underlying driver is written in Python and provides some convenience functions for specific sensors such as the Phidgets temperature, voltage, and current sensors, the Sharp GP2D12 IR sensor, and the Ping sonar sensor. It also includes two PID drive commands, Rotate(angle, speed) and TravelDistance(distance, speed), for controlling the drive motors. Most of the Serializer functions have been implemented, though a few have not been tested since I don't currently have some of the supported sensors. The functions that have not been tested are:

  • step (used with a bipolar stepper motor)
  • sweep (also used with a bipolar stepper motor)
  • srf04, srf08 and srf10 (used with the Devantech SRF04, SRF08 and SRF10 sonar sensors)
  • tpa81 (used with the Devantech TPA81 thermopile sensor)

The driver requires Python 2.6.5 or higher and PySerial 2.3 or higher. It has been tested on Ubuntu Linux 10.04.

Sincerely,
Patrick Goebel
The Pi Robot Project

AVR and ROS


Announcement from Adam Stambler of Rutgers to ros-users

Hello Folks,

I am proud to announce a new tool for using Arduinos and AVR processors in ROS projects. avr_bridge allows AVR processors to directly publish or subscribe to ROS topics, which lets everything from Arduinos to custom robot boards be first-class ROS components. This package can be found in Rutgers' new rutgers-ros-pkg.

avr_bridge is meant to simplify the use of Arduino and AVR processors in a ROS-based robot by providing a partial ROS implementation in AVR C++. In hobbyist robotics, these microcontrollers are often used to read sensors and perform low-level motor control. Every time a robot needs to interface with an AVR board, a new communication system is written; typically they all use a USB-to-serial converter and either a custom binary or text-based protocol. avr_bridge replaces these custom protocols with an automatically generated ROS communication stack that allows the AVR processors to directly publish or subscribe to ROS topics.

avr_bridge has already been deployed on Rutgers' PIPER robot and in the communications layer for a SparkFun IMU driver. In the next few weeks, it will be deployed on our newest robot, the Rutgers IGVC entry, as the communication layer for all of our custom low-level hardware. By using avr_bridge to communicate with our PCs, we have cut down on redundant code and simplified the drivers by allowing the AVR processor to directly publish messages. It is our hope that by extending ROS to the 8-bit microcontroller level we will see more open-source hardware that can be quickly integrated into cheap, custom robot platforms.

Cheers,
Adam Stambler
Rutgers University

Inverse Dynamics and Dynamics Markers


Daniel Hennes from Maastricht University spent his internship at Willow Garage modeling the dynamics of robotic manipulators using statistical machine learning techniques. He also created a useful visualization utility for ROS/rviz users that enables users to intuitively visualize the joint motor torques of a robot. Please watch the video above for an overview or read the slides below (download pdf) for more technical details. The software is available as open source in the inverse_dynamics and dynamics_markers packages on ROS.org.


Announcement from Mark Moll to robotics-worldwide

Dear colleagues,

The Kavraki Lab is pleased to announce the initial release of the Open Motion Planning Library (OMPL). OMPL is a lightweight, thread-safe, easy to use, and extensible library for sampling-based motion planning. The code is written in C++, includes Python bindings and is released under the BSD license.

Here are some of OMPL's features:

  • Implementations of many state-of-the-art sampling-based motion planning algorithms. For purely geometric planning, there are implementations of KPIECE, SBL, RRT, RRT Connect, EST, PRM, Lazy RRT, and others. For planning with differential constraints there are implementations of KPIECE and RRT. Addition of new planners poses very few constraints on the added code.
  • A flexible mechanism for constructing arbitrarily complex configuration spaces and control spaces from simpler ones.
  • A general method of defining goals: as states, as regions in configuration space, or implicitly.
  • Various sampling strategies and an easy way to add other ones.
  • Automatic selection of reasonable default parameters. Performance can be improved by tuning parameters, but solutions can be obtained without setting any parameters.
  • Support for planning with the Open Dynamics Engine, a popular physics simulator.
  • Tools for systematic, large-scale benchmarking.

OMPL is available at http://ompl.kavrakilab.org.
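To give a flavor of the Python bindings, here is a minimal geometric planning sketch for a rigid body in the plane; it uses the current OMPL Python class names (early releases used slightly different names), and the circular obstacle in the validity checker is just a placeholder.

from ompl import base as ob
from ompl import geometric as og

def is_state_valid(state):
    # Placeholder obstacle: reject poses inside the unit disc at the origin.
    return state.getX() ** 2 + state.getY() ** 2 > 1.0

space = ob.SE2StateSpace()
bounds = ob.RealVectorBounds(2)
bounds.setLow(-5)
bounds.setHigh(5)
space.setBounds(bounds)

ss = og.SimpleSetup(space)
ss.setStateValidityChecker(ob.StateValidityCheckerFn(is_state_valid))

start, goal = ob.State(space), ob.State(space)
start().setX(-3.0); start().setY(-3.0); start().setYaw(0.0)
goal().setX(3.0); goal().setY(3.0); goal().setYaw(0.0)
ss.setStartAndGoalStates(start, goal)

if ss.solve(1.0):   # plan for up to one second
    print(ss.getSolutionPath())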

OMPL is also integrated in ROS and will be available as a ROS package (see http://www.ros.org/wiki/ompl/unstable for documentation and download instructions). It will be fully integrated with the next major release of ROS (D-Turtle), which is scheduled for release in early 2011.

On top of the OMPL library, we have developed OMPL.app: a GUI for rigid body motion planning that allows users to load a variety of mesh formats that define a robot and its environment, define start and goal states, and play around with different planners. The OMPL.app code also comes with concrete command-line examples of motion planning in SE(2) and SE(3) (with and without differential constraints), using ODE for physics simulation, PQP for collision checking, and Assimp for reading meshes. OMPL.app is distributed under the Rice University software license (essentially, free for non-commercial use).

The Kavraki lab is fully committed to further developing OMPL for research and educational purposes. Please check out http://ompl.kavrakilab.org/education.html and contact us if you are interested in using OMPL in your class.

This project is supported in part by NSF CCLI grant #0920721 and a generous gift by Willow Garage.

Ed: here's a video from a year and a half ago showing some of Ioan Sucan's work with OMPL and the PR2

Path Optimization by Elastic Band


Visiting scholar Christian Connette from Fraunhofer IPA has just finished up his projects here at Willow Garage. Christian works on the Care-O-bot 3 robot platform, which shares many ROS libraries in common with the PR2 robot. While he was here at Willow Garage, he worked on implementing an "elastic band" approach (Quinlan and Khatib) for the ROS navigation stack. You can watch the video above to find out more about this work, or check out the slides below for more technical details (download PDF). The software is available as open source in the eband_local_planner package on ROS.org.

Kinect-based Person Follower


Garratt Gallagher from CSAIL/MIT is at it again. Above, you can see his work on using the new OpenNI-based ROS drivers to get an iRobot Create to follow a person around. This code is based off of the skeleton tracker that comes with the NITE library.

For those of you figuring out how to get the NITE tracking data into ROS, take a look at Garratt's nifun package.

Garratt Gallagher from CSAIL/MIT has followed up his Kinect piano and hand detection hacks with a full "Minority Report" interface. The demo builds on the pcl library to do hand detection. You'll find Garratt's open-source libraries for building your own interface in mit-ros-pkg.

Update: MIT News release with more details

OpenCV 2.2 Released


OpenCV 2.2 has been released. Major highlights include:

  • Reorganization into several smaller modules to better separate different OpenCV functionality, as well as experimental vs. stable code.
  • A new (alpha) GPU acceleration module, created with the support of NVIDIA.
  • Android support by Ethan Rublee.
  • A new unified features2d framework for keypoint extraction, descriptor computation, and descriptor matching.
  • A LatentSVM object detector, contributed by the Nizhniy Novgorod State University (NNSU) team.
  • A gradient boosting trees model, also contributed by the NNSU team.
  • An experimental Qt backend for highgui by Yannick Verdie (docs).
  • A chamfer matching algorithm, contributed by Marius Muja, Antonella Cascitelli, Marco Di Stefano and Stefano Fabri. See samples/cpp/chamfer.cpp.
  • A lot more of OpenCV 2.x functionality is now covered by the new Python bindings (a small sketch follows this list). These new wrappers require numpy to be installed.
  • Over 300 issues have been resolved. Most of the issues (closed and still open) are listed at https://code.ros.org/trac/opencv/report/6.
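As a quick taste of the new-style bindings, here is a minimal sketch; the function names follow the cv2 API as documented in later releases and may differ slightly in this early version, and the input filename is just a placeholder.

import cv2

# Load an image directly into a numpy array (path is a placeholder).
img = cv2.imread("example.jpg")
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # color-space conversion
    edges = cv2.Canny(gray, 50, 150)               # Canny edge detection
    print(edges.shape, edges.dtype)                # plain numpy array: (rows, cols), uint8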

For more information, please see the complete change log.


Kurt Konolige and Patrick Mihelich have prepared a technical overview of the Kinect calibration provided in the kinect_calibration package for ROS. For those of you wishing to understand the technology behind the PrimeSense sensor, this provides a detailed overview of how depth is calibrated -- and how we go about providing the calibration necessary for perception algorithms.

Kinect Calibration: Code Complete



Patrick Mihelich has just finished creating a kinect_calibration package for the ROS kinect stack. This calibration procedure takes advantage of the IR image access we added last week, plus the helpful discovery by Alex Trevor to use a halogen lamp to provide the necessary illumination for the IR image (in lieu of the IR projector).

Tomorrow we hope to do a release of the new stack with this calibration code as well as a proper tutorial. If you're really curious, you can try these bare-bones instructions.

Kinect Piano and Hand Detection


Garratt Gallagher from CSAIL/MIT has created a fun Kinect hack: Kinect Piano! You hold your hand steady in front of the Kinect and then move your fingers to play individual notes. You can find the code in the piano package in mit-ros-pkg.

This hack also comes with a library to help you create your own -- Garratt wrote a kinect_tools package that uses pcl to implement a hand detector, which he demonstrates below:

AR.Drone and front video


Announcement from Associate Professor Chad Jenkins at Brown

Hi ros-users,

Just a quick heads up that brown-ros-pkg now includes ardrone_brown, an AR.Drone driver that supports the front camera.

The following video is a test of ardrone_brown for visually following an AR Tag, using ar_recog (ARToolkit) for tag recognition:

The tag-following behavior is produced by the nolan package, which is run with only gain modifications from our previous AR-following video.

More to come.

-Chad

p.s. Dear Parrot, thanks for releasing Development License v2!

Announcement from Geoff Biggs of AIST on creating a standalone release of PCL

Hi all,

For all of us who plan to spend Thanksgiving working (perhaps we enjoy coding more than turkey, or it could just be that we don't live in the US...), there's a brand new release of PCL. This brings it up to 0.5. (I'm reliably informed that this makes it "half-way decent.")

In conjunction with the 0.5 release, we have just put the finishing touches on a standalone distribution of PCL. This means that those of you who are not using ROS can now use PCL in your applications.

It's still somewhat unrefined, and we need to work with some of the dependencies to make installation easier. Look for many improvements as we head towards 1.0!

Thanks again to all the contributors!

Geoff

Kinect with ROS moving forward quickly


The ROS/Kinect integration continues to progress quickly thanks to the efforts of the OpenKinect and ROS community. The initial ROS+Kinect contributors -- Alex Trevor, Ivan Dryanovski, Stéphane Magnenat, and William Morris -- have combined their efforts into a new ROS kinect stack. At Willow Garage, our engineers and researchers have also been working on the stack to improve the driver and integrate it with ROS libraries and tools. The community is now hard at work on solving problems like calibration, which will be important for using the Kinect in robotics. Feel free to sign up for the ros-kinect mailing list to keep up-to-date on the latest efforts.

We thought we'd make a quick video to show some of what's going on at Willow Garage with the Kinect. We've added features like multi-camera support and control of the Kinect motors to the popular libfreenect library. We're also working on making some fun Kinect hacks of our own -- watch until the end of the video to see where we are with those. We look forward to seeing your videos as well.

Kinect drivers for ROS coming together


Hector Martin's libfreenect open-source driver for the Microsoft Kinect has led to several efforts within the ROS community to create Kinect drivers for ROS. Stéphane Magnenat (ETH Zurich) and Alex Trevor were the first to port libfreenect to ROS. The CCNY Robotics Lab has now added their kinect_node package, which adds documentation, depth calibration, and example bag files. It's great to see how their efforts have contributed to each other, as well as to the broader libfreenect community.

The Kinect is obviously an important sensor for robotics. It's a sensor that we can all own at home, instead of having to share in a lab. That has already enabled so many people to quickly work together on an open-source driver, and it will be great to see what the community can build together next.

Probabilistic Grasp Planning


cross-posted from willowgarage.com

One of the challenges that robots like the PR2 face is knowing how to grasp an object. We have years of experience to help us determine what objects are and how to grasp them. We can tell the difference between a mug, a wine glass, and a bowl, and know that each should be handled in a different way. For robots, the world is not as certain, but there are approaches they can take that let them interact in an uncertain world.

This summer, Peter Brook from the University of Washington wrote a grasp planning system which lets robots successfully pick up objects, even in cases where they make incorrect guesses about what the object is. This planner uses a probabilistic approach, where the robot uses potentially incomplete or noisy information from its sensors to make multiple guesses about the identity of the object it is looking at. Based on how confident the robot is in each of the possible explanations for the perceived data, it can select the grasps that are most likely to work on the underlying object.

First, the planner builds up a set of representations for the sensed data; some are based on the best guesses provided by ROS recognition algorithms, and some use the raw segmented 3D data. For each representation, it uses a grasp-planning algorithm to generate a list of possible grasps. It then combines the information from all these sources, sorting grasps based on their estimated probability of success across all the representations. For grasp planners running on known object models, it can also use pre-computed grasps that speed up execution time.

This probabilistic planner allows the PR2 robot to cope with uncertainty and reliably grasp a wider range of objects in unstructured environments. It is also integrated into the ROS object manipulation pipeline so that others can experiment and improve upon it. For more information, please see Peter's slides below (download PDF), or check out the source code in the probabilistic_grasp_planner package on ROS.org.

STOMP Motion Planner


cross-posted from willowgarage.com

Robot motion planning has traditionally been used to avoid collisions when moving a robot arm. Avoiding collisions is important, but many other desirable criteria are often ignored. For example, motions that minimize energy will let the robot extend its battery life. Smoother trajectories may cause less wear on motors and can be more aesthetically appealing. There may be even more useful criteria, like keeping a glass of water upright when moving it around.

This summer, Mrinal Kalakrishnan from the Computational Learning and Motor Control Lab at USC worked on a new motion planner called STOMP, which stands for "Stochastic Trajectory Optimization for Motion Planning". This planner can plan paths for high-dimensional robotic systems that are collision-free, smooth, and can simultaneously satisfy task constraints, minimize energy consumption, or optimize other arbitrary criteria. STOMP is derived from gradient-free optimization and path integral reinforcement learning techniques (Policy Improvement with Path Integrals, Theodorou et al, 2010).

The accompanying video shows the STOMP planner being used to plan motions for the PR2 arm in simulation and a real-world setup. It shows the ability to plan motions in real-world environments, while optimizing constraints like holding the cans upright at all times. Ultimately, the utility of this motion planner is limited only by the creativity of the system designer, since it can plan trajectories that optimize any arbitrary criteria that may be important to achieve a given task.

For more information, please see Mrinal's slides below (download pdf), or check out the code in the stomp_motion_planner package on ROS.org. This package builds on the various packages in the policy_learning stack, which was written in collaboration with Peter Pastor. You can also check out Mrinal's work from last summer on the CHOMP motion planner.

ROS/ASEBA Bridge


Stéphane Magnenat from the Autonomous Systems Lab at ETH Zurich has announced a ROS/ASEBA bridge

Dear list,

Thanks to your quick and precise answers, I have programmed a bridge between ASEBA and ROS:

http://github.com/stephanemagnenat/asebaros

This bridge allows you to load source code, inspect the network structure, read and write variables, and send and receive events from ROS.

This brings ROS to the following platforms:

  • Mobots' marxbot, handbot and smartrob
  • e-puck

Kind regards,
Stéphane

cross-posted from willowgarage.com

This summer, Hae Jong Seo, a PhD student from the Multidimensional Signal Processing Research Group at UC Santa Cruz, worked with us on object and action recognition using low-cost web cameras. In order for personal robots to interact with people, it is useful for robots to know where to look, locate and identify objects, and locate and identify human actions. To address these challenges, Hae Jong implemented a fast and robust object and action detection system using features called locally adaptive regression kernels (LARK).

LARK features have many applications, such as saliency detection. Saliency detection determines which parts of an image are more significant, such as containing objects or people. You can then focus your object detection on the salient regions of the image in order to detect more quickly. Saliency detection can be extended to "space-time" for use with video streams.

LARK features can also be used for generic object and action detection. As you can see in the video, objects such as door knobs, the PR2 robot, and human faces can be detected using LARK. Space-time LARK can also detect human actions, such as waving, sitting down, and getting closer to the camera.

For more information, see the larks package on ROS.org or see Hae Jong's slides below (download PDF). You can also consult Peyman Milanfar's publications for more information on these techniques.

crossposted from willowgarage.com

Bastian Steder, a PhD student from the Autonomous Intelligent Systems Group at the University of Freiburg, Germany, spent the summer at Willow Garage implementing an object recognition system using 3D point cloud data. With 3D sensors becoming cheaper and more widely available, they are a valuable tool for robot perception. 3D data provides extra information to a robot, such as distance and shape, that enables different approaches to identifying objects in the world. Bastian's work focused on using databases of 3D models to identify objects in this 3D sensor data.

The main focus for Bastian's work was on the feature-extraction process for 3D data. One of his contributions was a novel interest keypoint extraction method that operates on range images generated from arbitrary 3D point clouds. This method explicitly considers the borders of the objects identified by transitions from foreground to background. Bastian also developed a new feature descriptor type, called NARF (Normal Aligned Radial Features), that takes the same information into account. Based on these feature matches, Bastian then worked on a process to create a set of potential object poses and added spatial verification steps to assure these observations fit the sensor data.

The full system can identify, in a very efficient manner, the existence and poses of arbitrary objects for which we have a point cloud model, using only the geometric information provided by the 3D sensor. Code for Bastian's work, including object recognition and feature extraction, has been integrated with PCL, which is a general library for 3D geometry processing in development at Willow Garage. To find out more, check out the point_cloud_perception stack on ROS.org. For detailed technical information, you can check out Bastian's presentation slides below (download PDF).

Making Manipulation More Mobile


crossposted from willowgarage.com

Adam Harmat from McGill University worked on three projects this summer to make the PR2 more dexterous when manipulating objects: a monitoring system for arm movement, a persistent 3D collision map, and a multi-table manipulation application. All of these projects demonstrated how increased knowledge of its environment is necessary for improving PR2's mobile manipulation capabilities.

The arm-monitoring system uses head-mounted stereo cameras to detect new obstacles. While the arm moves, the PR2 looks at locations that are a few seconds ahead of the arm's current position. Any detected obstacles are added to a collision map, and, if a future collision is anticipated, the arm stops and waits. If the new obstacle doesn't move, the PR2 will attempt to move around it.

The collision map was improved to store information about everything the robot has previously seen. This allows the PR2 to perform tasks that require it to relocate as it maintains knowledge about places it currently cannot see. This new collision map is based on Octomap, an open source package from the University of Freiburg. The octree structure of Octomap is more compact and also enables the storing of probabilistic values.

No one wants a clumsy robot. As a result of these projects, the PR2 is able to maintain more knowledge about its local environment, and is able to keep its arms from bumping into objects. Adam developed a demo application to demonstrate these new capabilities.

In his multi-table manipulation demo, the PR2 continuously finds and moves objects between separate tables. This application is integrated with the ROS navigation stack to determine pickup locations and navigate between tables. Adam's multi-table application demonstrates how planning with the persistent collision map can be integrated with base movement and local task-execution into a complete system.

For more information, you can view Adam's presentation slides below (download as PDF), or check out the move_arm_head_monitor and the multi_table_detector packages on ROS.org.

Actionlib for roslua


Tim Niemueller has announced an actionlib implementation for his roslua client library

Hi ROS users.

We have released another piece of the Lua integration for ROS; this time it's actionlib_lua. It has been developed at Intel Labs Pittsburgh as part of my research stay this year working with Dr. Siddhartha Srinivasa on the Personal Robotics project. You can find the source code at http://github.com/timn/actionlib_lua. It requires the most recent version of roslua, which you can get from http://github.com/timn/roslua.

It implements most features of actionlib, both client and server side. Additionally, it allows for some small optimizations; e.g., you can ignore the feedback and cancellation topics if they are not required or supported. It interacts well with the original actionlib for C++ and Python, and we are using it on HERB.

As always, feedback is welcome,
Tim

NDI Polaris driver in kul-ros-pkg


Dominick Vanthlenen of kul-ros-pkg announced the release of a driver for NDI's Polaris (R) 3D measurement system along with packages for using the driver with both Orocos and ROS

For all of you who have a Polaris (R) 3D measurement system: an ndi_hardware stack has been released! This enables you to use your measurement system on a Linux machine and have it publish tf frames.
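Once the stack is publishing tf frames, reading a tracked pose from Python is a matter of a standard tf listener, as in the sketch below; the frame names used here are assumptions, so substitute whatever frames the ndi_hardware stack actually broadcasts.

import rospy
import tf

rospy.init_node("polaris_tf_listener")
listener = tf.TransformListener()
rate = rospy.Rate(10.0)
while not rospy.is_shutdown():
    try:
        # Frame names are assumptions for illustration.
        trans, rot = listener.lookupTransform("/world", "/polaris_tool", rospy.Time(0))
        rospy.loginfo("tool at %s (quaternion %s)", str(trans), str(rot))
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        pass  # transform not available yet
    rate.sleep()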

nick

Jeff Rousseau announced a basic URDF model for the iRobot Create as well as a new aptima-ros-pkg code repository

Hi all,

I've put together a basic URDF for the iRobot Create platform. It's available for download through our new svn repo:

svn checkout http://aptima-ros-pkg.googlecode.com/svn/trunk/irobotcreatedescription

(Note: currently it relies on the erratic_gazebo_plugins pkg to implement its diff-drive)

Comments, bug reports and patches are appreciated

I plan to add/fix in the not too distant future:

  • a working bumper
  • tweak mass/friction params to be more realistic (they're fudged at the moment)
  • fix intermittent 'wobble' when transitioning between translations and rotations (friction coefficient issue?)

enjoy,
Jeff

Group photo

The Willow Garage PR2 robots have been out at the PR2 Beta Sites for only a few short months, and they have been busy with research projects, developing new software libraries for ROS, and creating YouTube hits. The first PR2 Beta Program conference call was recently held to highlight this work, and the list of software that they have released as open source is already impressive.

A partial list of this software is below so that ROS users and researchers can try it out and get involved. You'll find many more libraries in their public code repositories, and there is much more coming soon.

Georgia Tech

KU Leuven

JSK

  • EusLisp: now available under a BSD license
  • ROS/PR2 integration with EusLisp: roseus, pr2eus, and euscollada
  • jsk_ros_tools: includes rostool-alias-generator (e.g. rostopic_robot1) and jsk-rosemacs (support for anything.el)

TUM

  • knowrob: tools for knowledge acquisition, representation and reasoning
  • CRAM: reasoning and high level control for Steel Bank Common Lisp (cram_pl) and executive that reasons about locations (cram_highlevel)
  • prolog_perception: logically interact with perception nodes
  • pcl: contributions include pointcloud_registration, pcl_cloud_algos, pcl_cloud_tools, pcl_ias_sample_consensus, pcl_to_octree, mls

Stanford

Berkeley

  • towel_folding: version of Towel Folding from pre-PR2 Beta Program that relies on two Canon G10 cameras mounted on chest. Uses optical flow for corner detection.
  • LDA-SIFT: recognition for transparent objects
  • Utilities:
    • pr2_simple_motions: Classes for easy scripting in Python of PR2 head, arms, grippers, torso, and base
    • visual_feedback: Streamlined image processing for 3d point extraction and capturing images
    • stereo_click: Click a point in any stereo camera feed and the corresponding 3d point is published
    • shape_window: Provides a highgui-based interface for drawing and manipulating 2D shapes.

MIT

  • iSAM: Incremental Smoothing and Mapping, released under the LGPL.

USC

  • OIT: Overhead interaction toolkit for tracking robots and people using an overhead camera.
  • deixis: Deictic gestures, such as pointing

Freiburg

  • articulation: (stable) Fit and select appropriate models for observed motion trajectories of articulated objects.
  • Contributions to pcl, including range image class and border extraction method

Bosch

  • wviz: Web visualization toolkit to support their PR2 Remote Lab. Bosch has already been able to use their Remote Lab to collaborate with Brown University, and Brown University has released a rosjs to access ROS via a web browser.

Penn

Zeroconf package for ROS


I Heart Robotics has released a zeroconf package for ROS that enables advertising of ROS masters using Zeroconf/Avahi. This provides configuration-less setup for applications like I Heart Robotics's RIND and will also be a useful tool for multi-robot communication.

RL-Glue for ROS


Sarah Osentoski of Brown's RLAB recently announced a beta version of a ROS to RL-Glue bridge for reinforcement learning

Brown is pleased to announce our beta version of rosglue. rosglue is a bridge between ROS and RL-Glue, a standard reinforcement learning (RL) framework.

rosglue is designed to enable RL researchers and roboticists to work together rather than having to reimplement existing methods in both fields. A goal of rosglue is to allow ROS users to use RL algorithms provided by RL researchers and, likewise, to allow RL researchers to more easily use robots running ROS as a learning environment. rosglue allows a robot running ROS to become an RL-Glue environment, allowing RL-Glue-compatible agents to control the robot. A high-level visualization of the framework can be seen here.

rosglue uses a YAML configuration file to specify the topics, the services, and the learning problem. rosglue automatically subscribes to the topics and services specified in the file. It sends actions selected by the RL-Glue agent to the robot using the appropriate topic or service, and then creates observations from the specified topics for the RL-Glue agent.

rosglue is currently available for download from the brown-ros-pkg repository via:

svn co https://brown-ros-pkg.googlecode.com/svn/trunk/experimental/rlrobot/rosglue rosglue

and preliminary documentation can be found here:

http://code.google.com/p/brown-ros-pkg/wiki/rosglue

Robot Learning and Autonomy @ Brown (RLAB)

Canonical and polar scan matcher packages


The CCNY Robotics Lab, which was recently featured in this CityFlyer blog post, has just announced the release of two packages for laser scan registration.

Dear ROS-Users,

The CCNY Robotics Lab is pleased to announce the release of two packages for laser scan registration. canonical_scan_matcher is a wrapper around Andrea Censi's "Canonical Scan Matcher" [1]. polar_scan_matching is a wrapper around Albert Diosi's "Polar Scan Matching" [2].

Both packages estimate the displacement of a robot by comparing consecutive Laser Scan messages. They can be used without providing any estimate for the displacement of the robot between the scans. In this way, they can serve as an odometric estimate for robots that don't have any other odometric system. Alternatively, a displacement estimate can be provided as input to the scan matchers, in the form of an Imu message or a tf transform, in order to produce better (or faster) scan matching results.

While the two scan matchers use different algorithms and parameters, the ROS wrappers are identical in terms of topics/frames/tf's, making the two packages interchangeable.

Documentation and usage instructions can be found at the respective wiki pages:

As usual, we have provided a small demo bag file with laser data and a launch file that can be used to view the packages in action. Each wiki page also has a video of what the output of the demo should look like.

We hope you find the scan matchers useful, and we extend our thanks to the authors of the original implementations.

Ivan Dryanovski
William Morris
The CCNY Robotics Lab

[1] A. Censi, "An ICP variant using a point-to-line metric" Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2008
[2] A. Diosi and L. Kleeman, "Laser Scan Matching in Polar Coordinates with Application to SLAM " Proceedings of 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, August, 2005, Edmonton, Canada

RIND: ROS status INDicator



In addition to WowWee drivers and OpenCV tutorials, I Heart Robotics has just released a great Ubuntu panel tool called RIND, which stands for Robot/ROS Status Indicator. You can use it to manage your local roscore as well as get information on ROS nodes and topics. Check out the documentation or read the announcement for more information.

ROS Client Library for Lua


Tim Niemueller has announced a ROS client for Lua

Hello ROS users.

During the last weeks we have developed a Lua-based API to write ROS nodes in the Lua programming language. It allows for communicating with other nodes and participating in the ROS universe. It has been developed at Intel Labs Pittsburgh as part of my research stay this year working with Dr. Siddhartha Srinivasa on the Personal Robotics project.

Some highlights of the implementation:

  • Completely written in Lua, no wrappers
  • Implements topic and service communication
  • Reads message specifications on the fly and generates appropriate data structures at run-time, avoiding offline code generation
  • Fully documented API
  • only about 2800 lines of code (ohcount)
  • Test scripts for all features and simple examples

The implementation benefits from the inherent single-threading in Lua, meaning that everything is processed in a single main loop. This is one of the major factors for its simplicity. No attempt has been made to incorporate Lua add-ons that would provide true multi-threading. The implementation does not have nearly the versatility of the Python or C++ APIs (and we do not aim for that), but it does provide a very simple way to interact with ROS directly from Lua, without a middle-man.

The endeavor has been conducted to prepare for porting the Fawkes behavior engine to ROS, which is a framework for developing robot behavior employing Lua as its scripting language.

We would be delighted if the community would take a look at the implementation and provide some feedback. Once it has reached a certain stability and the feature set has been expanded to acceptable coverage, we would like to propose it for inclusion in the experimental package tree.

You can find the source code at http://github.com/timn/roslua. Some documentation on how to get started is provided in the README file.

Regards,
Tim

Three Robots: One ROS Node


post by Trevor Jay of Brown to ros-users

Hi ROS-Users!

Recently, thanks to the very nice people at Bosch and their remote lab efforts, we were able to play around with an actual PR2. We wanted to share the following video of a single ROS node (svn co https://brown-ros-pkg.googlecode.com/svn/trunk/experimental/nolan nolan) running completely unmodified on three very different robots (including the PR2).

Each of the robots is using ar_recog, our ROS-compatible wrapper around ARToolkit, to localize ARTags. A simple PID controller then directs them in following. As the Nao does not recognize Twist msgs, we have a simple ten-liner rebroadcasting them as Walk msgs, but even here the original control node is running unmodified.
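For the curious, here is a minimal rospy sketch of what such a rebroadcaster might look like. The Walk message type, its fields, and the topic names below are hypothetical stand-ins for illustration, not the actual Nao driver interface:

    #!/usr/bin/env python
    # Sketch only: rebroadcast geometry_msgs/Twist commands as Nao "Walk" commands.
    # The Walk message type, its fields, and the topic names are assumptions.
    import roslib; roslib.load_manifest('nao')   # package name assumed
    import rospy
    from geometry_msgs.msg import Twist
    from nao.msg import Walk                     # hypothetical message with x, y, theta fields

    pub = None

    def twist_cb(twist):
        walk = Walk()
        walk.x = twist.linear.x        # forward velocity
        walk.y = twist.linear.y        # sideways velocity
        walk.theta = twist.angular.z   # turning rate
        pub.publish(walk)

    if __name__ == '__main__':
        rospy.init_node('twist_to_walk')
        pub = rospy.Publisher('walk', Walk)
        rospy.Subscriber('cmd_vel', Twist, twist_cb)
        rospy.spin()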

Anyone interested in the vision stack can check it out at the brown ros pkg repository.

Thanks to everyone in the community who's helping to bring about this level of portability.

_RLAB (Robot Learning and Autonomy @ Brown)

New College Dataset parser for ROS


Ivan Dryanovski of the CCNY Robotics Lab has announced a New College Data parser for ROS. The announcement is below.

Dear ROS community,

As you may know, the New College Dataset is a freely available dataset collected by a robot completing several loops outdoors around the New College campus in Oxford. The data includes odometry, laser scan, and visual information.

We have recently released a ROS parser for the New College Dataset. The parser reads in the .alog data files provided on the NCD website and broadcasts ROS messages in real time. The parser also broadcasts transforms between the robot's base frame and the sensor frames, as defined in the NCD paper.

Details about the package, as well as a demo video of the NCD data being played back in rviz, are available here:

http://www.ros.org/wiki/ncd_parser

We hope you find this useful.

Ivan Dryanovski
CCNY Robotics Lab
The City College of New York

ROS and OpenRTM-aist


Geoffrey Biggs has released a patch that integrates ROS seamlessly into OpenRTM-aist; OpenRTM-aist users can download the patch to add a ROS transport.

Although I guess it's not a common thing yet, there have been murmurings for quite a while now here in Japan about a desire to be able to use OpenRTM-aist and ROS together. We would gain the huge range of functional software and the persistent channel-based communications of ROS, and keep the strong life-cycle and execution management of OpenRTM-aist.

So here's a patch for OpenRTM-aist that does exactly that.

This patch adds a new transport type to OpenRTM-aist specifically for communicating across ROS channels. No doubt someone will find the ability to use a persistent channel for communication useful, but the main benefit is that it gives nearly-seamless communications between components written for OpenRTM-aist and nodes written for ROS. Your network of distributed components/nodes no longer has to be in just one framework.

There are no wrappers involved. It's all native communication using the same ROS libraries as you would use in a pure-ROS system - no translation layers means maximum efficiency. You create a port type for the ROS transport, and off you go. If you already know ROS, you'll feel right at home using the ports.

The one caveat, and the reason I say nearly-seamless, is that we still don't have a unified set of types (there are also some issues with the typing system in OpenRTM-aist that we're working to sort out). Fortunately, the types issue is a hot topic amongst framework designers at the moment, so I hope we will have solved that problem before too long. :)

I have attached both the patch, for OpenRTM-aist-1.0.0, and a set of examples for each port type (publisher/subscriber/client/server). I hope to get a web page up on the OpenRTM-aist site shortly with a more detailed explanation of usage; for now, the examples and the doxygen comments in the source will point you in the right direction - it's all pretty simple.

Comments, suggestions, and improvements are welcome.

1000+ ROS Packages



The ROS community has grown an amazing amount this year. As the Robots Using ROS series has illustrated, there are all types of robots using ROS, from mobile manipulators to autonomous cars to small humanoids. As the types of robots have increased, so too has the variety of software you can use with ROS, whether it be hardware drivers, libraries like exploration, or even code for research papers. This diversity has allowed all types of developers, including researchers, software engineers, and students, to participate in this growing community.

Today we officially crossed the 1000 ROS package milestone. This is due in no small part to the many new ROS repositories that have come online this year. We are now tracking 25 separate ROS repositories that are providing open source code.

We're excited to see the expansion of such an amazing and vibrant ROS community. Thank you all for taking part.


We're very excited to announce that the mapping library from SRI International's Karto Robotics is now open source with an LGPL license. This mapping library contains a scan matcher, pose graph, loop detection, and occupancy grid construction -- all important building blocks for 2D navigation. When combined with Willow Garage's Sparse Pose Adjustment (SPA) for optimization (in the sba ROS package), it forms a complete stand-alone library for robust 2D mapping.

The Karto mapping library is being hosted on code.ros.org, and we've already integrated it with the ROS navigation stack. The Karto team recently benchmarked various SLAM systems on the RAWSEEDS dataset and found that the newest Karto 2.0 with SPA is slightly less precise than Karto 1.1, but more consistent and faster [1]. In terms of maximum error, Karto 2.0 performed as well as a localization-based solution (MCL). A paper describing the SPA technique is due to be published later this year.

We'd like to thank the Karto team for all the hard work that went into making this happen. You can visit kartorobotics.com to find out more about Karto as well as contact them regarding Karto integration services.

[1] R. Vincent, B. Limketkai, and M. Eriksen, "Comparison of indoor robot localization techniques in the absence of GPS," in Proceedings of SPIE Volume 7664: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XV (Defense, Security, and Sensing Symposium).


Karto SRI/Willow Garage Integration Team: Kurt Konolige, Benson Limketkai, Michael Eriksen, Regis Vincent, Brian Gerkey, Eitan Marder-Eppstein

OpenCV 2.1 Released



OpenCV 2.1 has been released. In addition to many improvements under the hood, OpenCV 2.1 adds the GrabCut image segmentation algorithm (C. Rother, V. Kolmogorov, and A. Blake). The stereo libraries have also been updated with new and improved algorithms, including H. Hirschmuller's semi-global stereo matching algorithm (SGBM).

Mac OS X users will be happy to know that OpenCV has been updated for Snow Leopard. You can now build it as a 64-bit library and highgui has been updated with new Cocoa and QTKit backends (thanks Andre Cohen and Nicolas Butko). Windows users can also build on 64-bit using MSVC 2008 or mingw64.

There are numerous other improvements with this release. We encourage users to check the change list to find out more.

On an administrative note, OpenCV has migrated from SourceForge to code.ros.org to take advantage of faster servers. The ticket tracker has also moved.

The Minoru is an inexpensive stereo webcam that can now be used with ROS. Bob Mottram recently updated the v4l2stereo library so that both the library and the Minoru camera can be used easily with ROS, as the video below demonstrates. The v4l2stereo library also integrates well with OpenCV.

You can find instructions on the Sentience site on how to easily remove the commercial packaging around the Minoru sensor, as well as instructions on how to use it with ROS.


HARK on Texai


When talking with people face-to-face, we may experience the "Cocktail Party Effect": even in a crowded, noisy room, we can use our binaural hearing to focus our listening on a single person speaking. With current telepresence technologies, however, we lose this important ability. Thankfully, there are already researchers giving us new tools for effectively bridging these remote distances.

Kyoto University's Professor Hiroshi Okuno and Assistant Professor Toru Takahashi, Honda Research Institute-Japan's Dr. Kazuhiro Nakadai, and four Kyoto University and Tokyo Institute of Technology graduate students spent a week at Willow Garage, integrating HARK with a Texai telepresence robot. HARK stands for Honda Research Institute-Japan (HRI-JP) Audition for Robots with Kyoto University. The robot audition system provides sound source localization, sound source separation, acoustic feature extraction, and automatic speech recognition.

The HARK system integrated well with ROS and our Texai. The Texai was outfitted with a green salad bowl helmet embedded with eight microphones, and there is now a hark package for ROS. Using this setup, their team put together three demos showing off the potential for telepresence technologies.

In the first demo, four people, including one present through a second Texai, talk over each other while the HARK-Texai separates out each voice. The second demo shows that sound is localized and that sound direction and power can be displayed in a radar chart. The final presentation puts these two demos together into a powerful new interface for the remote operator: the Texai pilot can determine where various sounds and voices are coming from, and select which sound to focus on. The HARK system then provides the pilot with the desired audio, cutting out any background noise or additional voices. Even in a crowded room, you can have a one-on-one conversation.

HARK came out of close collaboration between HRI-JP and Kyoto University, and Professor Okuno's passion to make computer/robot audition helpful for the hearing impaired. HARK is provided free and open source for research purposes and can be licensed for commercial applications.

Hizook on Robotis servos and ROS



Travis Deyle of Hizook/Healthcare Robotics Lab posted a nice overview of Robotis Dynamixel servos, including code for using the servos with gt-ros-pkg's robotis package. The Healthcare Robotics Lab uses these servos to construct their tilting Hokuyo laser rangefinder. You'll also find them in robots like Prairie Dog for their Crustcrawler arm.


Logging and playback is one of the most critical features when developing software for robotics. Whether you're a hardware engineer recording diagnostic data, a researcher collecting data sets, or a developer testing algorithms, the ability to record data from a robot and play it back is crucial. ROS supports these needs with "Bag files" and tools like rosbag and rxbag.

rosbag can record data from any ROS data source into a bag file, whether it be simple text data or large sensor data, like images. The tool can also record data from programs running on multiple computers. rosbag can play back this data just as it was recorded -- it looks identical. This means that data can be recorded from a physical robot and used to create virtual robots to test software. With the appropriate bag file, you don't even need to own a physical robot.
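If you would rather pull data out of a bag programmatically than replay it, the rosbag Python API can iterate over the recorded messages. Here is a minimal sketch, assuming a bag named scan.bag that contains a laser topic called /base_scan (both names are just examples):

    #!/usr/bin/env python
    # Sketch only: read messages back out of a bag file with the rosbag Python API.
    # The bag file name and topic name are assumptions for illustration.
    import rosbag

    bag = rosbag.Bag('scan.bag')
    for topic, msg, t in bag.read_messages(topics=['/base_scan']):
        # msg is the deserialized ROS message, t is the time at which it was recorded
        print t.to_sec(), topic, len(msg.ranges)
    bag.close()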

There are a variety of tools to help you manage your stored data, such as tools for filtering data and updating data formats. There are also tools to manage research workflows for sending bag files to Amazon's Mechanical Turk for dataset labeling. For data visualization, there is the versatile rxbag tool. rxbag can scan through camera data like a movie editor, plot numerical data, and view the raw message data. You can also write plugins to define viewers for your own data types.

You can watch the video to see rosbag and rxbag in action. Both of these tools are part of the ROS Box Turtle release, which you can download from ROS.org.

In addition to core robotics libraries, like navigation, the ROS Box Turtle release also comes with a variety of tools necessary for developing robotics algorithms and applications. One of the most commonly used tools is rviz, a 3-D visualization environment that is part of the ROS Visualization stack.

Whether it's 3D point clouds, camera data, maps, robot poses, or custom visualization markers, rviz can display customizable views of this information. rviz can show you the difference between the physical world and how the robot sees it, and it can also help you create displays that show users what the robot is planning to do.

You can watch the video above for more details about what the ROS rviz tool has to offer, and you can read documentation and download the source code at: ros.org/wiki/rviz.

We're pleased to announce that all of the packages within the brown-ros-pkg collection have been updated for ROS 1.0.x compatibility. The Brown automated install script has also been updated to reflect these changes. This update should fix any problems caused by using the previous 0.9.0 release with an updated roscore.

We encourage anyone interested to visit brown-ros-pkg. We currently have a driver for the iRobot Create, a basic driver for the Aldebaran Nao, and some basic vision packages.

Additions and improvements include:

probe

A ROS webcam capture node, new to this release. Probe leverages GStreamer, making it compatible with almost every Linux camera and video system available. In addition, GStreamer's software video processing can be used to emulate advanced features (e.g. white balancing) even for cameras that don't have the appropriate v4l hardware support.

teleop_twist_keyboard

A simple keyboard-based teleop interface inspired by teleop_base. Its only dependencies are the necessary geometry messages.
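To give a flavor of how little is involved, here is a stripped-down sketch of the idea (not the actual teleop_twist_keyboard source): read single keystrokes from the terminal and publish geometry_msgs/Twist messages accordingly. The key bindings and velocity scales are arbitrary choices for illustration.

    #!/usr/bin/env python
    # Sketch only: map a few keys to Twist commands, in the spirit of teleop_twist_keyboard.
    import sys, termios, tty
    import rospy
    from geometry_msgs.msg import Twist

    bindings = {'i': (1, 0), ',': (-1, 0), 'j': (0, 1), 'l': (0, -1)}  # key: (linear, angular)

    def get_key(settings):
        tty.setraw(sys.stdin.fileno())
        key = sys.stdin.read(1)
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
        return key

    if __name__ == '__main__':
        settings = termios.tcgetattr(sys.stdin)
        rospy.init_node('teleop_twist_keyboard_sketch')
        pub = rospy.Publisher('cmd_vel', Twist)
        try:
            while not rospy.is_shutdown():
                key = get_key(settings)
                if key == '\x03':   # Ctrl-C quits
                    break
                twist = Twist()
                lin, ang = bindings.get(key, (0, 0))
                twist.linear.x = 0.5 * lin
                twist.angular.z = 1.0 * ang
                pub.publish(twist)
        finally:
            termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)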

As always, we're interested in the community's feedback and suggestions.

_Trevor

Arduino and cmucam3 support for ROS


Andrew Harris just announced support for the Arduino and cmucam3 with ROS packages in his own ajh-ros-pkg repository. The announcement is below.

Hello, I have released the source code to my ROS packages for communicating with the Arduino and the cmucam3.

pmad (Arduino)

You can get the code from:

https://ajh-ros-pkg.svn.sourceforge.net/svnroot/ajh-ros-pkg/trunk/pmad/

There is a subdirectory in there called "arduino" that contains a sketch.txt file. This is the sketch you must load into the GUI and upload to the Arduino. Once you do that, you can start the PMAD service:

rosrun pmad pmad_service.py

Note that you'll have to have a "tty" ROS parameter set up if the USB does not connect as /dev/ttyUSB0. You might have to set tty to "/dev/ttyUSB1" for example. Once you set up the service, you can toggle the digital pins with:

rosservice call pmad_switch_control 4 0

This, for example, will set digital pin 4 to LOW. Note that the status packet described below only returns the state of digital pins 4 through 7.
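The same call can also be made from Python with a rospy service proxy. This is only a sketch: the SwitchControl service type and its arguments are assumptions for illustration and may not match the actual definition in the pmad package.

    #!/usr/bin/env python
    # Sketch only: call the pin-switching service programmatically.
    # The 'SwitchControl' service type and its fields are assumed, not taken from the package.
    import roslib; roslib.load_manifest('pmad')
    import rospy
    from pmad.srv import SwitchControl

    if __name__ == '__main__':
        rospy.init_node('pmad_client_example')
        rospy.wait_for_service('pmad_switch_control')
        switch = rospy.ServiceProxy('pmad_switch_control', SwitchControl)
        switch(4, 0)   # set digital pin 4 to LOW, mirroring the rosservice call above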

In addition, there is a status service that returns the current A/D readings on analog pins 0 through 3, the state of digital pins 4 through 7, and a count of the number of commands executed.

If you want the status published as a topic, there is an application:

rosrun pmad pmad_status_publisher.py

That will publish the status data once per second. The status message is defined in msg/Status.msg.

BTW, pmad stands for power management and distribution; I use solid-state relays on the outputs of the digital pins.

cmucam_png (cmucam3)

The cmucam_png node acquires a PNG image from the cmucam about once every twenty seconds and publishes it as a compressed image. To be honest, I haven't used it in a while, as I've switched to a firewire USB camera. Each image takes ~20 seconds because it comes over a 115,200 bps serial line, and the images are 352x287 pixels in size.

https://ajh-ros-pkg.svn.sourceforge.net/svnroot/ajh-ros-pkg/trunk/cmucam_png/

Right now, for some reason, I get compiler errors when compiling image_view, so I can't use image_view to make sure the image is getting pushed around. However, I have recently tested that I can receive an image from the cmucam and save it to disk as a PNG by adding a couple of lines to save the received PNG to disk. I am just having trouble pushing the image around ROS, but it might just be my setup. (It worked in the past ;) )

For the cmucam application, there is also a binary that must be downloaded to the camera. The source, etc, for this is in the directory "png-robin". In order to get this working you should have a working installation of the cmucam3 development environment, and be able to compile the examples in the cc3/projects directory. Once you can do this, and download these examples to the camera, you can make a new directory in that examples directory called png-robin and copy the contents of this directory into that one. The makefile will then work and compile the application. I have also checked in a hex file that you can use if you don't want to compile the application yourself. But you'll still have to download the hex file with lpc21isp.

Hopefully I have made the packages correctly, but these are the first ones I've tried to make. If you have any questions on either of these nodes, or if they don't seem to work, let me know!

thanks, -andrew

crossposted from willowgarage.com

Peter Pastor, a PhD student at USC, spent the past three months developing software that allows the PR2 to learn new motor skills from human demonstration. In particular, the robot learned how to grasp, pour, and place beverage containers after just a single demonstration. Peter focused on tasks like turning a door handle or grasping a cup -- tasks that personal robots like PR2 will perform over and over again. Instead of requiring new trajectory planning each time a common task is encountered, the presented approach enables the robot to build up a library of movements that can be used to execute these common goals.  For this library to be useful, learned movements must be generalizable to new goal poses. In real life, the robot will never face the exact same situation twice. Therefore, the learned movements must be encoded in such a way that they can be adapted to different start and goal positions.

Peter used Dynamic Movement Primitives (DMPs), which allow the robot to encode movement plans. The parameters of these DMPs can be learned efficiently from a single demonstration, allowing a user to teach the PR2 new movements within seconds. Thus, the presented imitation learning set-up allows a user to teach discrete movements, like a grasping, placing, and releasing movement, and then apply these motions to manipulate several objects on a table. This obviates the need to plan a new trajectory every single time a motion is reused. Furthermore, the DMPs allow the robot to complete its task even when the goal is changed on-the-fly.

You can find out more about the open source code for Peter's work here, and check out his presentation slides below (download PDF). For more about Peter's research with DMPs and learning from demonstration, see "Learning and Generalization of Motor Skills by Learning from Demonstration", ICRA 2009.


Update: Documentation is available on ros.org at http://www.ros.org/wiki/alufr-ros-pkg

I'm happy to announce v0.1, the first proper release of Freiburg's Nao Stack located at: http://code.google.com/p/alufr-ros-pkg/

You can check out the trunk (svn) from:
http://alufr-ros-pkg.googlecode.com/svn/trunk/

or download the stack package from:
http://code.google.com/p/alufr-ros-pkg/downloads/list

Changes are:

  • improved torso odometry
  • nao_ctrl now also transmits the state of Nao's onboard IMU
  • new nao_description package makes Nao's complete joint state, transformations and visualization available through robot_state_publisher
  • launch files for convenient start up

Some basic instructions are available at http://code.google.com/p/alufr-ros-pkg/, but I might also move them to ros.org if that's the more appropriate place. I would be happy to hear if all is working for you (or not), or how to make things more ROS-compliant.

Best regards, Armin

crossposted from willowgarage.com

Ethan Dreyfuss, who recently received a master's degree from Stanford University, is continuing his work here on autonomous person-following and dataset collection and annotation. The former project provides a useful building block for a wide variety of tasks. Consider a robot that helps you carry groceries. This robot is vastly more useful if it can carry your bags to the house without requiring teleoperation; the robot can simply track you and follow behind. At a high level, person-following comprises two principal tasks: person tracking and navigation.

The approach developed by Ethan and Caroline Pantofaru fuses a face detector with two weak person trackers: one for legs, and one for 3D blobs at person-height. None of these approaches is individually effective enough to provide robust tracking, but their strengths are complementary. The face detector is effective when the person is close to, and directly facing, the robot. The leg tracker provides high accuracy even when multiple people are present, but it is often confused by non-human obstacles and therefore cannot track reliably from afar. Conversely, the height-based blob tracker can track effectively from further away, yet it is easily confused by groups of people. By combining these techniques, Ethan and Caroline were able to develop a more robust person-tracking tool.

Once the robot can track a designated person, the information is passed on to the navigation stack. This same navigation software was used to complete Milestone 2, with some improvements made to help deal more quickly and robustly with dynamically-moving obstacles such as people.

In addition to the person-following project, Ethan is contributing to the collection and labeling of a large dataset of people in an indoor office environment. One of the major drivers of computer vision research is the availability of high-quality labeled data. The bulk of existing person datasets exclude indoor environments and instead focus on outdoor pedestrians. Indoor environments present numerous challenges for person detection, including poor lighting and environmental clutter. By automating as much of the collection (using the robot) and labeling (using Amazon's Mechanical Turk and Alex Sorokin's CV Web Annotation Toolkit) process as possible, Ethan's team will be able to provide a large, compelling dataset to encourage other researchers to tackle these challenging problems.

Ethan also picked up a number of side projects including rapid neighborhood computation on point clouds, and implementing a package that uses the open-source video codec Theora to allow low-bandwidth video streaming within ROS.

Brown has released a new version of its iRobot Create driver that exposes much more of the underlying functionality of the Create. For example, the actual sensor levels of the cliff sensors are now exposed, allowing for new functionality such as line following.

For details see:

http://code.google.com/p/brown-ros-pkg/

_Trevor

Fast on the heels of the Brown Nao driver, Armin Hornung of Albert-Ludwigs-Universität Freiburg has announced joy-package compatibility and torso odometry additions for the Nao driver -- as well as the alufr-ros-pkg repository. Armin's announcement is below.

Based on the recently announced Nao driver by Brown University, there is now a regular joystick teleoperation node available. It operates Nao using messages from the "joy" topic, so it should work with any gamepad or joystick in ROS. In addition, the control node running on Nao returns a basic torso odometry estimate. The code is available at http://code.google.com/p/alufr-ros-pkg/ to check out via SVN. A README with more details can be found in the "nao" stack there.

I'm open for feedback and suggestions!

-- Armin

Brown's Nao driver for ROS


Brown University has been hard at work developing an Aldebaran Nao driver for ROS and recently announced an open source (GPL) version to the community. The text of Trevor Jay's announcement is below.

Brown University is pleased to release a ROS driver for the Aldebaran Nao. The driver makes available: head control, text-to-speech, basic navigation, and (most interestingly) the Nao's forehead camera. Sample clients are part of the download, including a WiiMote teleop client.

Here are two videos of the driver in action (one from the robot's perspective):

The driver is available at the brown-ros-pkg download page.

If you have any problems or just find the driver useful, please let us know! We will add features as our work and the community need them.

_Trevor

Robots as Students: Towers of Hanoi


crossposted from willowgarage.com

Nate Koenig of the Interaction Lab at USC is continuing his work here at Willow Garage after a busy summer. Nate carried out an empirical study investigating the use of people as teachers for robots, while also researching learning by demonstration with PR2.

Participants in Nate's study used learning by demonstration to teach PR2 how to solve the Towers of Hanoi puzzle. As the name implies, learning by demonstration relies on a human teacher to provide a robot "student" with demonstrations of a complex task. In this case, the robot uses the state of the puzzle (i.e., location of red disk compared to blue and green), along with the teacher's command, to learn the demonstrated task. Volunteers used a web-based teaching tool to guide PR2 through the three-disk puzzle board. In one condition, teachers were able to directly see PR2, while in the other condition teachers viewed the robot's actions through a small video feed on the web tool. This manipulation allowed Nate to study if robot visibility affects teaching strategies and outcomes. Based on participants' commands and other observations from the environment, the robot learned how to solve the puzzle on its own.

Results from this study indicate that teachers perform better when visually separated from the robot. Performing "better" means that participants made fewer unnecessary or repetitive moves when teaching the robot. While this may seem counter-intuitive, the teachers who could see the robot were easily distracted from the task and seemed to build inaccurate mental models of PR2's capabilities.

In addition to this experiment, Nate worked on a number of smaller projects, including developing the first video streaming ROS node and creating a web-based graphical interface for interacting with PR2. You can find many of these contributions in the hanoi package for ROS. Nate is also the creator and lead developer of Gazebo, a popular open-source 3D robot simulator. Gazebo is heavily used at Willow Garage to simulate the actions of PR2, and Nate is providing us with numerous improvements.

Texas Robot



On the Willow Garage Blog, you can find out more about the Texas Robot, which is a telepresence robot built out of leftover PR2 parts and off-the-shelf components. One of the interesting aspects of the Texas robot is that it is running the PR2 software components as-is: motor controllers, tele-operation, visualization, etc... The only changes required were modifying the robot model (urdf) and writing new roslaunch files. You can find these changes in the texas package on ROS.org. Any other changes have been improvements that have been useful for both the Texas and PR2 robots, such as updates to the teleoperation controls.

Hierarchical Planning for PR2


crossposted from willowgarage.com

This summer, Jason Wolfe (UC Berkeley) worked on hierarchical task planning for the PR2. Planning is important for performing tasks efficiently. If you're gathering ingredients for the Triple Chocolate Volcanic Explosion Soufflé you're fiendishly craving, you may fetch both the eggs and the butter from the fridge before heading to the pantry for the sugar and chocolate. If you weren't planning ahead, you might go get the eggs, drop them off on the counter, go back to the fridge to get the butter, unload again, continue to the pantry for the chocolate, and so on. You'll still make a soufflé, but your plan will be far from optimal.

Jason's work on hierarchical planning helped the PR2 plan out its tasks in a more logical, premeditated manner. The package takes into consideration high-level decisions, like the order in which to fetch the ingredients, down to low-level decisions like where to position the base of the robot and what angle to grasp an object from. In the video, the PR2 is told to move several bottles between two tables. Instead of manipulating each bottle independently of the others, the robot plans ahead and places the bottle being manipulated near the next bottle to grab. That way, the robot can simply put down the current item and, without driving to the next goal, reach over and pick up its next bottle. Jason's code can be found in the hierarchical_planning package. You may also be interested in Jason's and Bhaskara Marthi's RSS 2009 presentation on Angelic Hierarchical Planning.

In the process of working on this package, Jason spent time developing and debugging lower-level component actions, such as moving the base to a particular location. Interactive scripting languages are ideal for this purpose. Jason developed an experimental Java client library for ROS (rosjava), which he then used to create a ROS interface to Clojure (rosclj), a Lisp-like language built on Java. Using this interface, he then developed a large library of scripted actions which can be used for quickly specifying and tele-operating complex sequences. These can be found in the clj_pr2 package.

crossposted from willowgarage.com

Ben Cohen of University of Pennsylvania has returned to the GRASP Lab after his summer internship here at Willow Garage. At Penn, Ben researches search-based methods for path planning for robotic manipulators. During his time here, he worked on two motion planners: one for door opening and one for manipulation. When compared to the door planner used in Milestone 2, the new door planner uses SBPL (Search-Based Planning Library) to give the PR2 two new capabilities. First, it allows the robot to not only push doors open, but also pull. Second, the door planner allows the robot to open doors, regardless of hinge position -- left or right side of the door. These two novel capabilities allow for robust, more universal door opening.

Additionally, Ben's work on a manipulation planner involved integrating SBPL into the move_arm ROS package, which integrates a variety of motion planners. Ben tested the SBPL planner on the PR2's arms, and added the supporting software needed to perform collision checking. With collision checking in place, SBPL can more readily handle cluttered, complex environments.

Here are Ben's end-of-summer presentation slides discussing his planning work (Download PDF from ROS.org):

Hand Detection and Image Descriptors


crossposted from willowgarage.com

This summer, Alex Teichman of Stanford University worked on an image descriptor library, and used this library to develop a new method for people to interact directly with the PR2. Using a keyboard or a joystick is great for directly controlling a robot, but what if an autonomous robot wanders into a group of people -- how can they affect its behavior?

Alex's approach allows the PR2 to "talk to the hand," as many of us cruelly experienced in middle school. In the video, Alex demonstrates that the PR2 can be made to stop and go by simply holding a hand up to the stereo camera. To accomplish this, Alex used a machine-learning algorithm called Boosting along with image descriptors, such as the color of local regions and object edges. He was able to train his algorithm on a data set that was labelled using the Amazon Mechanical Turk library that Alex Sorokin developed. This Mechanical Turk library harnesses the power of paid volunteers on the Internet to perform tasks, like identifying hands in images so that algorithms like Boosting can be trained.

Hand detection is part of a larger effort that Alex Teichman has been working on: developing a library with a common interface to image descriptors. This library, descriptors_2d, enables ROS developers to easily use descriptors like SURF, HOG, Daisy, and Superpixel Color Histogram.

You can learn more about the hand detection techniques and image descriptors in Alex's final summer presentation (download PDF from ROS.org).


Is the Bottle Half Full or Half Empty?


crossposted from willowgarage.com

Matt Piccoli (University of Pennsylvania) and Jürgen Sturm (University of Freiburg) did a lot more than break eggs and learn about how things move. They also tested the limits of PR2's fingertip pressure sensors with a seemingly simple task: identifying, without looking, if a juice bottle is open or closed, and full or empty.

Tactile information is invaluable when determining properties of objects that are visually inaccessible. In this vein, Matt and Jürgen developed a tactile perception strategy that can be used to detect the internal state of liquid containers. By measuring a bottle's reaction to a force applied by a gripper, their system can recognize whether the bottle is full or empty, and open or closed. The system learned this information from a set of training experiments carried out on different types of bottles and soda cans. Knowing whether a bottle is open or closed can help a robot determine the level of care required when manipulating the object.

You can find the code for Matt and Jürgen's work in the pr2_gripper_controller package on ROS.org.

Detecting Tabletop Objects


crossposted from willowgarage.com

Marius Muja of University of British Columbia began his internship in the middle of Milestone 2 excitement. For several weeks, he worked on two important perception components: detecting outlets from far away, and detecting door handles.

Thereafter, Marius focused on tabletop object detection and wrote the tabletop_objects package. Determining the exact position and orientation of an object, as well as its identity, is very important if a robot is grasping objects, and especially crucial if the object in question is fragile. Tabletop_objects uses a two-stage approach. In the bottom-up stage, initial estimations of possible object locations are made, and in the top-down stage, 3D models are fit into the estimated locations. After fitting the correct 3D model, the object's identity, position and orientation can be determined with high confidence. This approach can even distinguish between similar-looking drinking glasses. Marius worked with Ioan Sucan to integrate tabletop_objects and motion planning (move_arm), and together, they were able to successfully detect, grasp and manipulate fragile glass objects.

In addition to his work with tabletop_objects, Marius integrated FLANN (Fast Library for Approximate Nearest Neighbors) into OpenCV, and developed a phone-based teleoperation mode for PR2 based on Asterisk, an open source PBX.

Here are Marius's end-of-summer slides, where you can find more details about his work.

Marius Muja: Tabletop Object Detection (Download PDF from ROS.org)

Indoor Object Detection


crossposted from willowgarage.com

This summer, Dan Munoz of Carnegie Mellon University worked on helping the PR2 understand its environment using its 3-D sensors. Improving 3-D perception is important because it can help the PR2 with many tasks, such as localization and object grasping. At CMU, Dan and collaborators are developing techniques to improve 3-D perception for an unmanned vehicle in outdoor natural and urban environments. These techniques first take in a cloud of 3-D points, usually collected from a laser scanner, along with a label associated with each point. These labels identify objects such as buildings, tree trunks, plants, power lines, and the ground. Then, for each point and region of points, various local and more global features describing the local shape and distribution of each object are extracted. These labeled examples are then used to train an advanced machine learning tool that reasons about the best way to combine the local and global features that describe each object. In new environments, this feature extraction process is repeated and the features are given to the machine learning tool to determine what objects are present in the novel scene.

While at Willow Garage, Dan integrated this learning framework into ROS. As shown in the video, Dan experimented with helping the PR2 perceive objects on the room-sized scale, such as tables and chairs, as well as objects at the table-top-sized scale, including mugs and staplers. During the Intern Challenge, Dan also applied this same framework to distinguish between the three different types of bottles being served: Odwalla, Naked, and water. Dan developed the descriptors_3d package, the library used to compute various 3-D features for a point or region of points from a stereo camera or laser scanner. Additionally, he developed the functional_m3n package (Functional Max-Margin Markov Networks), the advanced machine learning tool that learns how to combine low-level and high-level feature information for each object.

Below are Dan's final presentation slides.

Daniel Munoz: Indoor Object Detection on Scribd (Download PDF from ROS.org)

CHOMP Motion Planner


cross posted from willowgarage.com

Mrinal Kalakrishnan, one of three motion planning interns here at Willow Garage, is finishing up his summer project and returning to the University of Southern California. Mrinal has been working on a smooth motion planning and control pipeline for the PR2, introducing a new approach to object manipulation. The key component of this work was the implementation of CHOMP (Covariant Hamiltonian Optimization and Motion Planning), a motion planner developed at CMU and Intel Research. You can find this implementation in the chomp_motion_planner package for ROS.

Mrinal chose to implement this motion planner on the PR2 because CHOMP's method of planning away from obstacles produces very smooth, natural-looking movements. You can see in the video that the PR2's arm trajectory is rather fluid and avoids unusual or awkward joint angles. The animation shows the arm optimizing the trajectory away from the bookshelf, while maintaining a smooth motion plan. Mrinal's work with CHOMP allowed for informative comparisons to be made with the two other motion planners being researched and implemented here, ompl and sbpl_arm_planner. All three motion planners use the same interface, making switches between the three systems very simple.

In addition to his work with CHOMP, Mrinal wrote the distance_field package for ROS which performs 3-D obstacle inflation to generate a cost-map for arm planners. He also wrote spline_smoother, a library of algorithms which can convert a set of waypoints, as typically generated by motion planners, into a smooth spline trajectory suitable for execution on a robot.

Below is Mrinal's end-of-summer presentation, where you can find additional details about his work here at Willow Garage.

Mrinal Kalakrishnan: CHOMP Motion Planner on Scribd (Download PDF from ROS.org)

rf_detector: 3D Object Recognition


crossposted from willowgarage.com

Min Sun is returning to the University of Michigan in Ann Arbor, where he does computer vision research with a particular interest in 3-D object recognition. During his summer here at Willow Garage, he focused on recognizing table-top objects like mice, mugs, and staplers in an office environment. Min is the primary creator of rf_detector (Random Forest), which recognizes objects and their poses. The detector uses the stereo camera along with the texture light projector to collect images and the corresponding dense stereo point clouds. From there, rf_detector predicts the object type (i.e. mouse, mug, stapler) and its location and orientation. This information can be crucial to have before attempting object manipulation, as many object types, such as mugs, require careful handling.

In the future, Min will be looking for ways to scale up this approach to a wider range of object classes. Min continues to look for other features and model representations that make object recognition more robust.

Min also wrote the geometric_blur package, which calculates geometric blur descriptors.

Here are the slides from Min's final internship presentation describing his work on rf_detector and the detection pipeline.

Min Sun: 3D Object Detection on Scribd (Download PDF from ROS.org)

Collision Detecting and Arm Planning


crossposted from willowgarage.com

Ioan Sucan is headed back to Rice University after his third stay here at Willow Garage. Ioan is a motion planning researcher and is the author of Open Motion Planning Library (OMPL), a library of sampling-based motion planning algorithms. These algorithms are important for the PR2 because they enable the arm to grasp and manipulate objects, while simultaneously avoiding collisions with people and other still or moving objects.

This past winter, Ioan and Radu Rusu used OMPL to do dynamic collision avoidance. This summer, Ioan was able to make many improvements to OMPL so that the PR2 can grasp and manipulate objects in cluttered indoor environments. In the video, you can see how the PR2 is able to grasp objects while moving its arm through complex obstacle courses. Data from the tilting laser scanner is used to construct and update a 3D-representation of the environment, allowing the arm to avoid even moving obstacles.

Ioan also improved the robot_self_filter and the collision_map. When the PR2's arm moves in front of the sensors, two problems occur: the arm looks like an obstacle in the environment, and the arm blocks the robot's view of the environment behind the arm. The robot_self_filter, in combination with the collision_map, allows the PR2 to disregard its arm as an obstacle, and "remember" objects behind the arm.

Here are the slides from Ioan's end-of-summer presentation describing his work on OMPL and other projects.

Ioan Sucan: Motion Planning for the PR2 Arm on Scribd (Download PDF from ROS.org)

Humans Helping Robots See


crossposted from willowgarage.com

Alex Sorokin of University of Illinois, Urbana-Champaign has been hacking on many projects this summer, extending work that he started at Willow Garage last year. Alex is the author of the CV Web Annotation Toolkit, which is an open source tool that helps researchers in computer perception collect and classify data, with the help of workers on Amazon's Mechanical Turk. If you're a researcher collecting a data set, you can use this toolkit to submit images to Mechanical Turk and pay people to label them for you. For example, you can instruct Mechanical Turk users to draw boxes around all of the people in an image, or draw polygons around people's hands. These manually-labeled datasets allow researchers to test their own algorithms and compare them against what a person sees. If you're worried about the quality of results that random Turkers might give you, you can easily let an additional set of users grade the results that you get back, and increase the likelihood of accurate responses.

In addition to helping our researchers collect data sets, this toolkit is also very useful for our robots. For just a couple of dollars, you can easily submit images to Mechanical Turk and have users teach the robot where your refrigerator and dishwasher are, what your cups look like, or where your power outlets are. We're a long way from having robots that can identify objects in an environment as well as humans can, but toolkits like Alex's help us to bridge that gap by allowing humans to lend their abilities to robots.

In addition to his work with Mechanical Turk, Alex came up with many great extensions to ROS. These include bagserver, which enables random-access into ROS bag files, and bag_image_view, which lets you visually scan through images in a bag file and play them back.

Below is Alex's end-of-summer presentation, which discusses his various contributions in greater detail.

Alex Sorokin: CV/Mechanical Turk Presentation (Download PDF from ros.org)

University of Freiburg researcher Jürgen Sturm is just finishing his second internship here at Willow Garage. In addition to helping out with our egg-breaking efforts, he's been using the stereo cameras of PR2 to detect planar objects like doors, drawers, books, and boxes. More importantly, he has been tracking the movement of these objects to learn articulation models, i.e. how these objects move. Does the door open to the left or the right? Does the drawer slide in or out? Where will the handle of the drawer be when it is fully open? This is the sort of information that is critical for enabling robots to operate in our own environments.

We've posted a video of Jürgen discussing his findings as well as slides from a presentation that he gave at Willow Garage. You can also download his code from the planar_objects ROS package.

Planar Objects and Articulation Models on Scribd (Download PDF from ros.org)

A while back we showed a demo video using an iPod Touch to drive around a PR2. It was a fun experiment, but it relies on a proof-of-concept that's not quite ready for primetime.

Srećko Jurić-Kavelj of the University of Zagreb showed us that there's more than one way to get data from an iPod Touch into ROS. Instead of cross-compiling ROS onto the iPhone, which is still very difficult, he used the open-source accelerometer-simulator project on Google Code to receive the accelerometer data on another computer. He was then able to easily adapt a sample Python script to broadcast that data as a rospy node.
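As a rough sketch of the idea (the actual wire format used by accelerometer-simulator may differ, so treat the port number and packet parsing below as assumptions), such a node just listens for accelerometer samples over UDP and rebroadcasts them as velocity commands:

    #!/usr/bin/env python
    # Sketch only: receive accelerometer samples over UDP and rebroadcast them as Twist commands.
    # The port number, packet format, and scaling are assumptions for illustration.
    import socket
    import rospy
    from geometry_msgs.msg import Twist

    if __name__ == '__main__':
        rospy.init_node('accel_teleop_sketch')
        pub = rospy.Publisher('cmd_vel', Twist)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(('', 10552))                                      # port assumed
        while not rospy.is_shutdown():
            data, _ = sock.recvfrom(1024)
            x, y, z = [float(v) for v in data.split(',')[:3]]       # "x,y,z" packets assumed
            twist = Twist()
            twist.linear.x = -y    # tilting the device forward/back drives forward/back
            twist.angular.z = -x   # tilting left/right turns
            pub.publish(twist)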

You can see the results here as he drives around a Pioneer 3DX:

Orocos RTT and ROS integrated


Integrating ROS with other robot software frameworks, such as OpenRAVE, Player, and Euslisp, has been important to us as it allows developers to leverage the strengths of each. No framework can be the best at everything, so it's important to allow developers to choose the tools they need. This video shows the integration between ROS and the Orocos Real Time Toolkit (RTT), which was done by Orocos developer Ruben Smits during a month-long visit to Willow Garage. Ruben became a vital member of our Milestone 2 team and still managed to have time to build this great, seamless integration.

While ROS has good support for distributing components over the network, RTT offers hard realtime communication between components. This demo combines the best of both worlds: the realtime low-level control is handled by a set of RTT components that directly control the PR2 robot hardware, but the communication between the controllers is visible to the whole ROS network.

The integration makes the RTT components appear as ROS components, without breaking the realtime communication between the RTT components. This allows RTT developers to use tools developed for ROS, such as rviz and rxplot, to visualize the communication between controllers.

-- Wim Meeusen

ROS on the iPod Touch/iPhone


Josh Faust and Rob Wheeler recently got ROS working on the iPod Touch/iPhone and put together a quick demo that uses an iPod Touch as a joystick for the PR2 robot. The iPod Touch and iPhone combine a high-quality display with different modes of interaction that make them appealing for robotics interfaces, and they are a platform to test cross-compilation of ROS. The difficult challenge in getting ROS running on the iPod Touch was solving the cross-compilation issues. Once they had those figured out, they were able to add about twenty lines of code to the standard iPod Touch accelerometer demo to translate the accelerometer input into commands to drive the PR2.

This is still a proof of concept, but we hope in the coming months to make it a stable platform for ROS development. We've put up a ROSPod wiki page so you can track our efforts and contribute your own.

Find this blog and more at planet.ros.org.

