For developers who want to write robotic applications on the Android platform, or who want to extend their robot with new sensors for indoor positioning and 3D perception, Intermodalics created the ROS Streamer App for Tango.
This Android app for Tango-compatible devices streams real-time 3D pose estimates from Tango's visual-inertial odometry (VIO) algorithms, camera images, and point clouds into the ROS ecosystem. The app and code are freely available for download from the Play Store and GitHub. More information can be found on the ROS wiki page. The application was developed in close cooperation with Ekumen and Google.
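The exact topic names and message layouts the app publishes are documented on the wiki; as a small illustration of consuming its output, here is how a subscriber might reduce the published orientation quaternion to a 2D heading (plain Python, ROS message plumbing omitted):

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Extract the heading (yaw, in radians) from an orientation
    quaternion such as the one carried in a published pose message."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

# A quaternion for a pure 90-degree rotation about the vertical axis:
print(quaternion_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
# → approximately pi/2 (1.5708)
```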
Intermodalics is committed to maintaining and improving the app, so stay tuned for new features and improvements. As an open source project, we invite you to propose or contribute new features as well.
Future updates will contain even more Tango features such as area learning (SLAM) and 3D reconstruction.
We hope that this application and code will facilitate the use of Tango devices in robotic applications.
This summer, Risto Kojcev, sponsored by the Google Summer of Code (GSoC) and directed by the Open Source Robotics Foundation (OSRF) and the ROS-Industrial (ROS-I) Consortium, developed a user-friendly ROS interface for switching a manipulator into Cartesian impedance control mode. The external forces that the robot applies to the environment can also be set through the developed interface.
Our first goal was to create a set of common messages containing the necessary parameters for setting impedance and force control. This allows interaction between the ROS ecosystem and the robot's ROS driver. The messages are based on the parameters commonly used for impedance/force control and on discussion with the ROS community. The current set of relevant ROS messages is available in the majorana repository. I would also like to encourage the robotics community to contribute to this project by sharing their suggestions; I believe this set of messages could be further generalized and improved based on community input.
The second goal was to develop a user interface which allows the user to set the necessary parameters for Cartesian impedance/force control and interactively switch between control modes. For this I extended a previous GSoC 2014 project, the Cartesian Path Planner Plug-In for MoveIt!. The updated plugin now contains the relevant UI fields for setting Cartesian impedance and force control. Depending on the implementation and the properties of the robot controller, the plugin also allows switching interactively between control modes at runtime.
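As a sketch of what the Cartesian impedance parameters in such a UI control, the following illustrates the standard spring-damper law along one axis; function and parameter names here are illustrative, not the plugin's API:

```python
def impedance_force(k, d, x_des, x, v_des, v, f_ext=0.0):
    """Commanded force along one Cartesian axis: a virtual spring-damper
    pulling the end effector toward (x_des, v_des), plus a feed-forward
    external-force term such as the one settable through the interface."""
    return k * (x_des - x) + d * (v_des - v) + f_ext

# 1 cm position error with stiffness 500 N/m and no velocity error:
print(impedance_force(500.0, 20.0, 0.01, 0.0, 0.0, 0.0))  # → 5.0
```

Lowering k makes the arm compliant along that axis; the f_ext term corresponds to the settable force the robot applies to the environment.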
OLogic has been involved with Project Tango since the very beginning of the project, but we have always had our eyes on the goal of using it for robotics applications. Indoor localization and mapping is one of several problems Project Tango focuses on, and its overlap with robotics is a perfect fit. Google has provided several SDKs for working with Project Tango in Java, C, or Unity, and has shown some impressive demos using sparse mapping under Unity to navigate around 3D virtual worlds or games on the device.

The phone has the ability to perform Visual Inertial Odometry (VIO), and we wanted to make this available within ROS. We wrote some ROSJava nodes that use the SDK to access the VIO and publish pose, transform frames (tf), and odometry messages. This allowed us to display a URDF of a floating phone on a map in RViz and show where the phone is located in the office, in near real time. We have several demo videos of our summer intern roaming around the office with a Project Tango phone while we visualize the phone's position and orientation in 3D space.

This is just a starting point for all the things we want to do with Project Tango and ROS, but we have a good framework in place to add other nodes into the puzzle, and to get to the point soon where we can navigate a robot around the office with only a Project Tango phone for the brains. The project is available on GitHub at https://github.com/ologic/Tango, and all the build instructions for getting it running on a Tango device are there on the wiki. There are lots of helpful hints and tips on building 3D maps using the Tango Mapper application (the one that Google provides), and then taking those maps and bringing them into ROS to try to navigate a space using an existing ROS robot. We will be adding to the project continually, as it is still very much a work in progress.
We are happy to announce the first release (v0.1) of the STDR Simulator (Simple Two Dimensional Robot Simulator) ROS package. It is a fact that a variety of robot simulators is available: some characteristic examples are the Player/Stage/Gazebo project, USARSim, Webots, V-REP, and many others. We acknowledge that these frameworks are state of the art and provide a vast range of services, from realistic 3D simulation to hardware support. The price you have to pay, though, is that they are either so architecturally complicated that they confuse the novice robotics researcher, or they require a lot of computational power to provide realistic 3D simulation. In addition, almost all of the aforementioned frameworks have many dependencies that make the installation procedure time consuming and sometimes impossible due to dependency errors. What we envisioned was a simple simulator whose installation wouldn't require more than a few clicks, one that would allow the robotics researcher to materialize their ideas in a simple and efficient manner.
That is why we decided to create STDR Simulator. STDR Simulator's two main goals are simplicity of use and full ROS compliance.
It doesn't aim to be the most realistic simulator, nor the one with the most functionality. Our intention is to make simulation of a single robot, or of a swarm, as simple as possible by minimizing the actions the researcher has to perform to start their experiment. In addition, STDR can function with or without a graphical environment, which allows experiments to take place even over SSH connections.
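As a rough sketch of what simple 2D simulation amounts to, the core of a planar simulator is a kinematic update applied to each robot every tick; the snippet below is illustrative only, not STDR's actual code:

```python
import math

def integrate_pose(x, y, theta, v, omega, dt):
    """One Euler step of the unicycle model -- the kind of 2D kinematic
    update a planar simulator applies to each robot every tick."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight along +x for one second at 0.5 m/s:
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_pose(*pose, v=0.5, omega=0.0, dt=0.1)
print(pose)  # → approximately (0.5, 0.0, 0.0)
```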
STDR Simulator is created in a way that makes it fully ROS compliant. Every robot and sensor emits a ROS transformation (tf) and all measurements are published on ROS topics. In this way STDR takes full advantage of ROS, aiming at easy use with a state-of-the-art robotic framework. ROS compliance also means that the graphical user interface and the STDR server can be executed on different machines, and that STDR can work together with RViz!
We hope that STDR Simulator will be useful to the beginner robotics researcher aiming to understand the area, as well as to the advanced roboticist who wants to try out their ideas in mapping, navigation, or path planning.
STDR Simulator is - of course - open source and can be downloaded from our GitHub page (https://github.com/stdr-simulator-ros-pkg/stdr_simulator) or through the official ROS binary packages (ros-hydro-stdr-simulator). Since this is the initial release, we expect that not everything will fully work, so it would be perfect if you could provide us with comments, suggestions, and any bugs you discover. Our bug tracker is:
From André Dietrich of Otto-von-Guericke-Universität Magdeburg, Fakultät für Informatik, on ros-users@
The aim of this contribution is to connect two previously separated worlds: robotic application development with the Robot Operating System (ROS) and statistical programming with R. This fruitful combination becomes apparent especially in the analysis and visualization of sensor data. We therefore introduce a new language extension for ROS that allows nodes to be implemented in pure R. All relevant aspects are described through the step-by-step development of a common sensor data transformation node. This includes the reception of raw sensor data via the ROS network, message interpretation, bag-file analysis, transformation and visualization, as well as the transmission of newly generated messages back into the ROS network.
The steered_wheel_base_controller repository contains the steered_wheel_base_controller ROS package, which in turn contains SteeredWheelBaseController, a base controller for mobile robots. The controller works with bases that have two or more independently-steerable driven wheels and zero or more omnidirectional passive wheels (e.g. swivel casters).
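As a sketch of the kinematics such a controller must solve (not the package's actual code), each independently steerable wheel's angle and speed follow directly from the desired base twist:

```python
import math

def wheel_command(vx, vy, wz, wheel_x, wheel_y):
    """Steering angle (rad) and speed (m/s) for an independently steerable
    wheel mounted at (wheel_x, wheel_y) in the base frame, given the
    desired base twist (vx, vy, wz)."""
    # Velocity of the wheel contact point = base velocity + wz x r
    vel_x = vx - wz * wheel_y
    vel_y = vy + wz * wheel_x
    return math.atan2(vel_y, vel_x), math.hypot(vel_x, vel_y)

# Pure sideways translation: every wheel steers to 90 degrees.
angle, speed = wheel_command(0.0, 1.0, 0.0, 0.3, 0.2)
print(angle, speed)  # → 1.5707963... 1.0
```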
Clearpath Robotics and Kinova Robotics have just released the first ever ROS package for the JACO Robot Arm, with assistance from Worcester Polytechnic Institute's NASA Sample Return team. The package exposes all of the functionality of the arm to ROS, so feedback from the arm is available to be published to topics inside of ROS.
Up until now, the JACO Robot Arm has mainly been used as an assistive device rather than as a manipulator for research and development. However, with Clearpath's new partnership with Kinova, the JACO Robot Arm is finding new territory in research applications, including aerospace and mining.
Previously, the arm could only be controlled manually or through a separate computer running Windows. Now the ROS driver, which is designed exclusively for the JACO Robot Arm, integrates the hardware and software into a single system, creating an easy-to-use and time-efficient process. For those who purchase the arm from Clearpath Robotics, it will come fully-loaded with a launch file (included in the driver), which will initialize communications with the arm and prepare it to accept commands.
The JACO Robot Arm is unique for ROS users because it is well priced and it's delivered as a complete, all-in-one package (so, no more messing around with separate hardware and software systems - customers get both, right out of the box!). Not to mention, it is one of the best looking manipulators on the market.
JACO Robot Arm is a commercial-quality, accessible robot arm that is now available to ROS users. To download the first ROS interface that works with JACO Robot Arm, go to: http://www.ros.org/wiki/jaco
Iwaki interaction manager is currently used to manage both conversational (verbal and non-verbal) and task-related interactions for two social robots at Carnegie Mellon University. The task-related interactions include:
- information retrieval requests for a new version of Hala, the bilingual robot receptionist at CMU Qatar
- table-game playing interactions for an upcoming robot at CMU
Iwaki includes a production system, inspired by COLLAGEN's plan tree [Rich and Sidner, 1998], that interprets rules (recipes) stored in XML. The production system approach allows creating interactions with aspects of finite-state, frame/script, and information-state-based dialogue management. The distribution includes examples and a manual (under development).
Key features:
- Interactions are fully described by XML recipes.
- Flexible styles of dialogue management (form, script, information-state).
- Soft real-time, time-sensitive actions.
- Developed to be well-suited for multi-party interactions.
- Does not include resource management, for now.
- Can be used either as a C++ library or as a standalone process.
- robotnik-powerball-ros-pkg: a repository that includes the necessary files for the Schunk Powerball simulation. The arm can be controlled by sending commands directly to the controller topic or via a PS3 pad. Currently it supports Cartesian/Euler operation. (http://code.google.com/p/robotnik-powerball-ros-pkg/)
Furthermore, we have updated our Guardian repository (http://code.google.com/p/guardian-ros-pkg/) so that the robot can be simulated under ROS Fuerte. We have included the URDF and launch files to represent our new mobile manipulators: the GBALL (composed of a Guardian integrating the two previous packages). The GWAM robot (a Guardian integrating the Barrett WAM arm) is already available and will be uploaded soon.
Please add the Google Code repositories to the index.
We will be doing a groovy release from the existing driver (basically the same as fuerte). I plan to merge changes we have made at SwRI into the master/trunk. Some of these are improvements to the driver itself, as well as some arm navigation work.
As with our other packages, the trunk/master will be unstable development for groovy and the branch(released) version will be stable.
As always we are interested in submissions and bug fixes from the community. If anybody is interested in helping develop this stack further, please let me know.
From Marc Freese of Coppelia Robotics on ROS Users
Dear ROS community,
We are happy to announce that the V-REP robot simulator, which includes an extensive and powerful ROS interface, is now open source. As of now, it is also fully free, without any limitation, for students, teachers, professors, schools, and universities. No registration is required. Moreover, V-REP is now available for customization and sub-licensing.
V-REP is the Swiss army knife among robot simulators: you won't find a simulator with more features and functions, or a more elaborate API:
- Cross-platform: Windows, Mac OSX and Linux (32 & 64 bit)
- Open source: full source code downloadable and compilable. Precompiled binaries also available for each platform
- 6 programming approaches: embedded scripts, plugins, add-ons, ROS nodes, remote API clients, or custom solutions
- 6 programming languages: C/C++, Python, Java, Lua, Matlab, and Urbi
- API: more than 400 different functions
- ROS: >100 services, >30 publisher types, >25 subscriber types, extendable
- Importers/exporters: URDF, COLLADA, DXF, OBJ, 3DS, STL
- 2 physics engines: ODE and Bullet
- Kinematic solver: IK and FK for ANY mechanism, can also be embedded on your robot
- Interference detection: calculations between ANY meshes. Very fast
- Minimum distance calculation: calculations between ANY meshes. Very fast
- Path planning: holonomic in 2-6 dimensions and non-holonomic for car-like vehicles
- Vision sensors: includes built-in image processing, fully extendable
- Proximity sensors: very realistic and fast (minimum distance within a detection volume)
- User interfaces: built-in, fully customizable (editor included)
- Robot motion library: fully integrated Reflexxes Motion Library type 4
- Data recording and visualisation: time graphs, X/Y graphs or 3D curves
- Shape edit modes: includes a semi-automatic primitive shape extraction method
- Dynamic particles: simulation of water- or air-jets
- Model browser: includes drag-and-drop functionality, also during simulation
- Other: multi-level undo/redo, movie recorder, convex decomposition, simulation of paint, exhaustive documentation, etc.
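As a toy illustration of what a kinematic solver computes, and not V-REP's actual (fully general) implementation, here is the closed-form FK/IK pair for the simplest mechanism, a 2-link planar arm:

```python
import math

def fk(l1, l2, q1, q2):
    """Forward kinematics of a 2-link planar arm: joint angles -> tip position."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def ik(l1, l2, x, y):
    """One closed-form inverse kinematics solution (elbow-down branch)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp against rounding
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

q1, q2 = ik(1.0, 1.0, 1.0, 1.0)
print(fk(1.0, 1.0, q1, q2))  # → approximately (1.0, 1.0)
```

A general-purpose solver like V-REP's handles arbitrary kinematic chains numerically; the round trip above just shows the FK/IK contract.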
I'd like to announce the differential_drive package. It provides some of the low-level nodes needed to interface a differential drive robot to the navigation stack. I think this will be especially useful for beginning hobby roboticists like myself. The package provides the following nodes:
diff_tf - Provides the base_link transform.
pid_velocity - A basic PID controller with a velocity target.
twist_to_motors - Translates a twist to two motor velocity targets
virtual_joystick - A small GUI to control the robot.
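As an illustration of what twist_to_motors computes (a sketch, not the node's actual code), splitting a twist into two wheel targets is one line of differential drive kinematics:

```python
def twist_to_motors(vx, wz, wheel_separation):
    """Split a twist (linear vx, angular wz) into left/right wheel
    velocity targets for a differential drive base."""
    v_left = vx - wz * wheel_separation / 2.0
    v_right = vx + wz * wheel_separation / 2.0
    return v_left, v_right

# Spin in place at 1 rad/s with 0.4 m between the wheels:
print(twist_to_motors(0.0, 1.0, 0.4))  # → (-0.2, 0.2)
```

The pid_velocity node then drives each motor toward its target; diff_tf integrates the resulting wheel motion back into the base_link transform.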
This repository currently contains a ROS package for interfacing to the soon-to-be-released PX4Flow optical flow board (coming soon from 3D Robotics). In the coming weeks we plan to add a ROS interface to the PX4FMU autopilot, as well as software for MAV autonomy as showcased in our recent IROS paper (http://www.cvg.ethz.ch/MAV).
I would like to submit a new repository. Since the current ROS joystick drivers for the PS3 controller don't support OS X, I used glfw (a cross-platform library) to build a node that publishes the sensor_msgs/Joy message.
I also documented how to connect the controller to OS X 10.8 via Bluetooth. Any suggestions or comments are welcome.
A new package from Paul Bouchier
A new package, rosserial_embeddedlinux, which is part of the rosserial stack and gives embedded Linux systems the ability to run ROS nodes, is now available.
With the rosserial_embeddedlinux package, you can use ROS with Linux systems that don't or can't run full-blown ROS. The package provides a ROS communication protocol that works over your embedded Linux system's WiFi or network connection (or its serial port) and communicates with a ROS message proxy running on a native ROS system. It allows your embedded Linux system to run apps that are close to full-fledged ROS nodes: they can publish and subscribe to ROS topics, provide services, and get the ROS system time over any of the supported connection types.
rosserial_embeddedlinux extends the rosserial_arduino code that enabled an Arduino to present a ROS node. It supports multiple nodes.
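The framing sketched below is in the spirit of the rosserial wire protocol (sync bytes, a length field with its own checksum, a topic id, the payload, and a payload checksum); the exact byte layout differs between rosserial protocol versions, so treat this as illustrative rather than normative:

```python
def frame_message(topic_id, payload):
    """Build a rosserial-style frame: sync bytes, little-endian length,
    length checksum, little-endian topic id, payload, payload checksum.
    Byte layout here mirrors the two-checksum style of later rosserial
    releases and is illustrative only."""
    length = len(payload)
    frame = bytearray([0xFF, 0xFE])                        # sync / version
    frame += bytes([length & 0xFF, (length >> 8) & 0xFF])  # length, LE
    frame.append(255 - sum(frame[2:4]) % 256)              # length checksum
    body = bytes([topic_id & 0xFF, (topic_id >> 8) & 0xFF]) + payload
    frame += body
    frame.append(255 - sum(body) % 256)                    # topic+data checksum
    return bytes(frame)

print(frame_message(125, b"\x01").hex())
```

The checksums let the proxy on the native ROS side resynchronize after line noise, which matters on serial links.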
The repo holds a v4l camera driver (based on the luvcview program) with a dynamic reconfigure interface, including reading and storing parameters such as brightness, contrast, and even focus if the camera supports it.
A tiny program generates the dynamic reconfigure configuration file CameraParameters.cfg by probing all supported controls.
There is still a lot that can be enhanced, so I would be happy to find other developers.
The current version contains packages to interact with the AISoy1 robot using its original API through ROS. Further developments are already in preparation: e.g. simulation of AISoy1 in Gazebo, packages for the mobile platform (botmovil) used by AISoy1 to let it move around, its simulation model, and many others.
We will publish them very soon, so stay tuned for more!
We, the Robotics and Biology Laboratory (RBO) at TU Berlin, are currently preparing the public release of our robotics software. We would therefore kindly ask you to index our new ROS package repository at
The repository already contains the stack "iap", consisting of our Interactive Perception library.
It contains modules for feature tracking, image segmentation, and detection of kinematic structures.
We would like to announce the ROS repository of the Delft Robotics Institute. It currently contains stacks for extremum seeking control and saliency detection, as well as some miscellaneous packages. Expect more soon!
Included are a few Gazebo launch files for viewing the robot alone, in the International Space Station (ISS) US Lab module, or with the "ISS Task Board," a structure with a number of articulated buttons and switches that the robot can manipulate. There is also an interactive marker script that can be used to teleop the robot.
Of course, this is a beta version, and we hope to make improvements over time. But I invite anyone who is so interested to take a look and see what they can do!
Any feedback, even if you just have downloaded it and liked it, is welcome.
The IPv6 features of the stack are enabled by setting the environment variable ROS_IPv6 to 'on'. The master then starts to listen on IPv4 and IPv6 addresses and nodes try to connect using IPv6. Nodes using IPv4 will still be able to connect to servers and publishers using IPv6.
IPv6-capable subscribers are currently not able to connect to publishers using IPv4, but this is on our TODO list (trying to connect to all available addresses until a successful connection can be made).
To use the stack a few steps have to be performed (it is assumed that you build ROS from source):
1) Replace your 'ros_comm/' directory in 'ros-underlay/' with the 'ros_comm6/' directory from the repository.
2) Remove your ROS installation from '/opt/ros/fuerte'. The reason for this is that the includes from the original ros_comm stack confuse the build system.
3) Build and install ROS again.
4) Check '/etc/hosts'. Make sure that 'localhost' and your hostname also point to '::1'.
5) Add 'export ROS_IPv6=on' to your .bashrc or other startup scripts.
The code is currently tested on Linux. Your mileage on Windows or other Unix variants may vary. This especially affects the dual-stack behavior (the IPV6_V6ONLY socket option is currently not modified).
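For readers unfamiliar with the dual-stack behavior being discussed, this is the standard way a listener accepts both address families; it is a generic sketch, not code from the stack itself:

```python
import socket

def dual_stack_listener(port=0):
    """A TCP listener reachable over both IPv6 and IPv4 (the latter via
    mapped addresses such as ::ffff:192.0.2.1). Clearing IPV6_V6ONLY is
    the key step; as noted above, the stack currently leaves this option
    at the system default."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))  # "::" listens on all IPv6 (and mapped IPv4) addresses
    s.listen(5)
    return s
```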
We have tested the stack in a small test environment at our lab, yet much of the code is probably still untested, as it handles a lot of corner cases and configuration parameters. If you run into any problems using the stack, please contact us or send us a patch.
We're happy to announce a new package that uses a robot's previous
experience to plan paths faster than planning-from-scratch alone.
The package is called LightningROS, and it is an implementation of the
Lightning Path Planning Framework described in this paper:
A Robot Path Planning Framework that Learns from Experience
Dmitry Berenson, Pieter Abbeel, and Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), May, 2012.
This package uses OMPL planners to implement each component in Lightning and can be called the same way as any other OMPL planner.
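The core idea, retrieve a similar stored path and repair it, falling back to planning from scratch, can be sketched in a few lines; this is a toy serial version, whereas the real framework runs both modules in parallel and keeps whichever finishes first:

```python
def lightning_query(start, goal, library, plan_from_scratch, repair):
    """Toy version of the Lightning idea: adapt the library path whose
    endpoints are closest to the new (start, goal) query, falling back
    to planning from scratch if no stored path can be repaired."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    if library:
        best = min(library, key=lambda p: dist(p[0], start) + dist(p[-1], goal))
        repaired = repair(best, start, goal)
        if repaired is not None:
            return repaired
    return plan_from_scratch(start, goal)

# Hypothetical stand-ins for the repair module and a from-scratch planner:
library = [[(0.0, 0.0), (5.0, 5.0)]]
repair = lambda path, s, g: [s] + path[1:-1] + [g]
plan_from_scratch = lambda s, g: [s, g]
print(lightning_query((0.0, 0.0), (5.0, 5.0), library, plan_from_scratch, repair))
```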
The geometric relations semantics software (C++) implements the geometric relations semantics theory, offering support for semantic checks on your rigid body relation calculations. This will avoid commonly made errors and hence considerably reduce application and, especially, system integration development time. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.
The goal of the software is to provide semantic checking for calculations with geometric relations between rigid bodies on top of existing geometric libraries, which only work on specific coordinate representations. Since there are already a lot of libraries with good support for geometric calculations on specific coordinate representations (the Orocos Kinematics and Dynamics Library, the ROS geometry library, Boost, ...), we do not want to design yet another library, but rather extend these existing geometric libraries with semantic support. The effort to extend an existing geometric library with semantic support is very limited: it boils down to the implementation of about six function template specializations.
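A minimal sketch of what such semantic checking looks like, with illustrative names rather than the library's actual C++ interface: each geometric relation carries the points and frame it refers to, and composition is refused when the semantics don't line up.

```python
class Position:
    """Position of one point relative to another, expressed in a
    coordinate frame -- the semantic annotations attached to otherwise
    plain coordinate data (names here are illustrative)."""
    def __init__(self, point, reference, frame, coords):
        self.point = point          # the point whose position this is
        self.reference = reference  # the point it is relative to
        self.frame = frame          # frame the coordinates are expressed in
        self.coords = coords

    def __add__(self, other):
        # Composing a->b with b->c is only meaningful when the middle
        # point matches and both are expressed in the same frame.
        if self.point != other.reference or self.frame != other.frame:
            raise ValueError("semantically invalid composition")
        return Position(other.point, self.reference, self.frame,
                        tuple(a + b for a, b in zip(self.coords, other.coords)))

p_ab = Position("b", "a", "world", (1.0, 0.0, 0.0))
p_bc = Position("c", "b", "world", (0.0, 1.0, 0.0))
print((p_ab + p_bc).coords)  # → (1.0, 1.0, 0.0)
```

In the C++ library these checks happen at the template level on top of the underlying coordinate library, so invalid compositions are caught without runtime cost in the happy path.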
The software already includes Orocos typekits and supports the KDL and ROS geometry types. Furthermore, it is fully Orocos and ROS compatible.
I'm a student at UT Dallas, and we have a ROS driver for the AR.Drone. It is an early version, and it has almost the same functionality as the ardrone_brown package. The main difference and advantage is that it is written using rosjava and javadrone.
This means that no matter how broken the official SDK is on current or future versions of Linux, we could always have a ROS driver as long as a Java Virtual Machine is present. Moreover, if somebody has the skills this could be ported to Windows or Android, since it does not use native code at all.
From Hai Nguyen of the Healthcare Robotics Lab @ Georgia Tech
Hello ROS community,
I would like to announce the result of our work here at Georgia Tech in collaboration with Willow. This is the first release of RCommander (version 0.5), a visual framework for easy construction of SMACH state machines, allowing users to interactively construct, tweak, execute, load, and save state machines. There are two stacks: the rcommander_pr2 stack contains an implementation with basic states for controlling the PR2 robot, while rcommander_core contains the framework's essentials, allowing the construction of custom RCommander interfaces for robots other than the PR2. The wiki doc links below also have a few tutorials for getting started with either rcommander_pr2 or rcommander_core.
Just some notes: I've tested this on ROS Electric and have not done much with Fuerte yet, but it will be supported soon. The wiki docs point to an older Mercurial repository; they should point to the newer Git repository once the ROS indexer gets updated.
This stack contains a single package, roscs, which provides C# wrappers for ROS. It does not support the complete ROS functionality, but the parts we deemed most important: publish/subscribe, service calls, limited support for parameters, and some minor functionality. We are using this under Linux with Mono; Windows is not tested and will probably not work.
This stack contains some useful utility libraries, namely:
cstf, which wraps some tf methods in C#,
Castor, a utility library for reading and writing configuration files from C++ and C#,
udp_proxy_generator, which generates multicast proxies for ROS topics. It is a very simple approach to a multi-master environment: no namespace or topic remappings are done; messages are simply relayed. Given a configuration file that specifies topics and message types, C++ code for a proxy is generated and compiled.
This stack holds ALICA, a framework to coordinate and control multiple robots. It consists of three packages:
Planmodeller - an Eclipse-based IDE to model multi-robot behaviour.
AlicaEngine - an execution layer for the designed programs.
AlicaClient - a simple monitoring GUI.
At its core, ALICA, similar to SMACH, uses hierarchies of state automata to define behaviour. In contrast to SMACH, it is geared toward teams of robots, and features task and role allocation algorithms as well as coordinated constraint solving and optimisation facilities.
This is part of an ongoing effort to make the source code of the RoboCup Mid-Size Team Carpe Noctem publicly available. All this software is used on our MSL robots. Documentation will be added to the wiki once indexed.
This video: http://www.youtube.com/watch?v=HhIrhU19PG4 shows the software in action during the Dutch Open 2012 tournament.
Distributed Systems Group
University of Kassel
From Kyle Maroney at Barrett Technology
In advance of ICRA 2012 and the first annual ROSCon, Barrett Technology is happy to announce a ROS repository created and maintained by Barrett Technology for control of the WAM Arm and BH8-280 BarrettHand. Barrett Technology's ROS repository is an abstraction of Libbarrett, a real-time controls library written in C++.
We look forward to contributing to the ROS community, as well as supporting current and future customers.
For the past few months I've been developing a Python-scriptable GUI for ROS that can substitute for RViz, but the main focus was to create something that would allow a developer to rapidly craft a user interface for a non-technical user. It's still very early days and I'd hesitate to call the repository even alpha yet; however, anyone interested in having a look can find it at
Key features are that ROS nodes can submit scripts to the visualizer to associate them with standard RViz markers, and that scripts can be made to execute when a marker is interacted with in a number of ways. The menu environment of the visualizer is written entirely in PyQt and interacts with a C++ core through published getter/setter functions and callbacks.
I'll be maturing the program over the next few weeks and adding a few tutorials. Comments, suggestions, and criticisms would be appreciated.
From Todor Stoyanov of the Mobile Robotics and Olfaction Lab at the Center for Applied Autonomous Sensor Systems (AASS) at Örebro University
We are pleased to announce the release of a new ROS package source code repository! The repository contains several packages for perception and grasping, developed at the Mobile Robotics and Olfaction Lab at the Center for Applied Autonomous Sensor Systems (AASS) at Örebro University, Sweden. The source code is available at
with some rudimentary documentation on the wiki page. Future releases are also going to include packages for artificial olfaction, so stay tuned.
The released packages contain source code for some of our publications, in particular Three-Dimensional Normal Distributions Transform (3D-NDT) point cloud registration and Independent Contact Regions (ICR) computation for multi-finger grasping.
In case you are going to attend the ICRA 2012 conference, you can get first-hand information on several of the packages by attending the following sessions:
- RGB-D registration: SPME workshop, Mon. 11:20
- ICR on noisy real-world data: TuC210.3
- 3D-NDT registration: ThD06.1
- ICR on a patch contact model: ThB02.2
Don't hesitate to contact us with comments or questions about the packages.
My name is Ganesh P Kumar, and I'm a student at Autonomous System Technologies Research & Integration Laboratory (ASTRIL), at Arizona State University, USA.
This is to announce the phspline_trajectory_planner ROS stack, developed by my advisor, Dr. Srikanth Saripalli, and myself. This stack extends the navigation_experimental stack so that goal_passer follows a path generated by a Pythagorean-hodograph (PH) spline.
These packages are from Stefan Schaal's CLMC lab at the University of Southern California. The repository currently includes stacks/packages for:
- Dynamic Movement Primitives
- STOMP motion planning
- PI^2: path integral reinforcement learning
- a Xenomai-compatible version of rosrt
- generic inverse kinematics with constraints
- various other utility packages
Many of these packages also exist in the Willow repositories, but the
ones here are usually newer. Disclaimer: we tend to run a little
behind on upgrading packages to newer ROS releases. Please email the
package maintainer if you have trouble getting something to work.
This Bitbucket repository is a read-only hg mirror of the actual Git repository where we develop code: https://github.com/usc-clmc/usc-clmc-ros-pkg. The mirror exists so that we can continue to use Git and still have all our stacks in a single repository. Please let me know if there's a better way to deal with this.
It contains packages useful for doing the assignments and exercises from the HacDC robotics course material located here: http://wiki.hacdc.org/index.php/RoboticsClass2011. It's been a while since I last announced a repository, so I'm not sure what info is required in an announcement these days. If the repository crew needs more info than the link above, just let me know.
My personal favorite package from this repository is floating_faces: a set of cubes, suitable for use in Gazebo, that are texture-mapped with faces of students from the robotics class. These cubes are useful for experimenting with OpenCV face tracking while using Gazebo.
Announcement by Stephan Wirth of the University of the Balearic Islands to ros-users
Dear ROS users,
I am happy to announce our new public repository for ROS software
developed and/or maintained by the Systems, Robotics, and Vision Group
of the University of the Balearic Islands, Spain.
You can find the top-level rosinstall file here:
The first stack we would like to share with the community is srv_vision, which contains a ROS package for libviso2, a library for visual odometry (monocular and stereo) developed by Andreas Geiger of the Karlsruhe Institute of Technology, Germany.
You can find the package here (or install it using the rosinstall file above):
We are currently participating in two EU-funded projects (aerial and underwater robotics), both using ROS. We expect to release more of our software in the near future and are looking forward to your feedback.
Announcement by Juan Antonio Breña Moral to ros-users
My colleague Lawrie Griffiths and I, Juan Antonio Breña Moral, are developing new software to bind a Lego Mindstorms NXT to ROS using the open source project LeJOS, a Java Virtual Machine for Lego Mindstorms. LeJOS has a rich Java API for building robots with the NXT: http://lejos.sourceforge.net/nxt/nxj/api/
This development is an alternative to the current support for the NXT in ROS. The main difference between nxt_ros and nxt_lejos is the technology used to connect to an NXT brick: in our case we use LeJOS and ROSJava working together. Besides, we are testing other projects such as JavaCV.
Our ROS development is located in the following URL:
I'd like to let you know about a teleop stack I've developed to allow generic tele-operation "source" devices (such as keyboards, joysticks, etc.) to be used interchangeably to control generic tele-operation "sink" devices (such as robot bases, pan-tilt units, robot arms, etc.).
I know there are a number of tele-operation and joystick stacks and packages out there, but many of them are doing nearly identical things in slightly (and frustratingly) different ways. The purpose of this stack is to avoid the rewriting of source or sink specific code for different source-sink combinations, and to provide a common interface for a variety of tele-operation sources.
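A sketch of the source/sink decoupling idea, with names and mappings that are illustrative rather than the stack's actual interfaces: sources normalize device input into a common intermediate state, and sinks scale that state into device-specific commands, so any source can drive any sink.

```python
class TeleopState:
    """Device-independent intermediate representation: normalized axes
    in [-1, 1] and named buttons (an assumption for illustration, not
    the stack's actual message definition)."""
    def __init__(self, axes=None, buttons=None):
        self.axes = axes or {}
        self.buttons = buttons or {}

def keyboard_source(key):
    """One possible 'source': map key presses onto the common state."""
    mapping = {"w": ("forward", 1.0), "s": ("forward", -1.0),
               "a": ("turn", 1.0), "d": ("turn", -1.0)}
    axis, value = mapping.get(key, (None, 0.0))
    return TeleopState(axes={axis: value} if axis else {})

def base_sink(state, max_lin=0.5, max_ang=1.0):
    """One possible 'sink': scale the common state into a base twist."""
    return (max_lin * state.axes.get("forward", 0.0),
            max_ang * state.axes.get("turn", 0.0))

print(base_sink(keyboard_source("w")))  # → (0.5, 0.0)
```

Swapping the keyboard for a joystick only means writing a new source; the sinks stay untouched, which is exactly the rewriting this stack aims to avoid.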
I am announcing the release of ros_rt_wmp.
ros_rt_wmp is a ROS node capable of replicating any ROS topic or service on another computer connected wirelessly to the source, without the need to share the same roscore.
As an example, consider a team of robots cooperatively building a map. The robots have to exchange their laser and pose information. However, if the network connecting the robots is not fully connected and they cannot use an infrastructure network (an outdoor chain network, for example), there is no way to share data among the robots using ROS alone.
ros_rt_wmp makes it possible to distribute and decentralize a complex robotic system across multiple computation units in a transparent way: the only requirement is knowing which data each unit needs from the other robots.
ROS-Industrial is a BSD-licensed ROS stack that contains libraries, tools and drivers for industrial hardware. The goals of ROS-Industrial are to:
Create a community supported by industrial robotics researchers and professionals
Provide a one-stop location for industry-related ROS applications
Develop robust and reliable software that meets the needs of industrial applications
Combine the relative strengths of ROS with existing industrial technologies (i.e. combining ROS high-level functionality with the low-level reliability and safety of industrial robot controllers).
Create standard interfaces to stimulate "hardware-agnostic" software development (using standardized ROS messages)
Provide an easy path to apply cutting-edge research in industrial applications, using a common ROS architecture
Provide simple, easy-to-use, well-documented APIs
ROS-Industrial is at a pre-1.0 release level. It currently supports ROS control (arm navigation with collision-free path planning) for the Motoman SIA10D and the DX100 controller. The software works with actual hardware or a simulated robot in rviz.
The ManyEars project proposes a robust sound source localization and tracking method using an array of eight microphones. The method is based on a frequency-domain implementation of a steered beamformer along with a particle-filter-based tracking algorithm. Tests on a mobile robot show that the algorithm can localize and track multiple moving sources of different types in real time over a range of 7 meters. These new capabilities allow robots to interact with people through more natural means in real-life settings.
ROSOSC is a set of utilities and nodes for interacting with Open Sound Control hardware and software devices.
One of the main features is the ability to interact with TouchOSC (created by hexler: http://hexler.net), the iOS application, to create dynamic, touch-interactive, control surfaces that can be used with ROS. These control surfaces can be composed of several different types of controls, such as push buttons, toggle buttons, faders, rotary knobs, labels and LEDs. Most of the controls support two-way communication with ROS, which allows users to change color, position, size, and visibility of all of the controls on the page via ROS topics.
There are two main ways of interacting with TouchOSC with ROS:
Using a "default handler" - Simply create a layout file in the freely available TouchOSC editor, and then launch the ROS touchosc_bridge node. All of the controls on the page will show up as topics that can be published to/subscribed to using ROS.
Using a "tabpage handler" - Users can also create a python module that can directly interface with OSC clients. There are many features available to developers, including multiple client support, client join/quit callbacks, and client tabpage switching callbacks. More can be found out on the wiki and API documents.
API Docs are available on the ROS wiki
Two tabpage handlers are included out of the box:
diagnostics_handler - a tabpage for viewing diagnostics and aggregate diagnostics data
teleop_handler - a tabpage for broadcasting command velocity messages to holonomic and differential drive robots.
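To illustrate the "controls show up as topics" idea from the default handler above, here is a minimal sketch of how an OSC control address might be turned into a ROS-style topic name. The mapping scheme shown is an assumption for illustration, not necessarily the exact convention touchosc_bridge uses.

```python
# Sketch: map a TouchOSC control address like '/1/fader1' onto a
# ROS-style topic name. The '/touchosc/tabpageN/...' scheme here is an
# assumption for illustration, not touchosc_bridge's documented layout.

def osc_address_to_topic(address, prefix="/touchosc"):
    """Convert an OSC address such as '/1/fader1' into a topic name."""
    parts = [p for p in address.split("/") if p]     # ['1', 'fader1']
    return "/".join([prefix, "tabpage" + parts[0]] + parts[1:])

print(osc_address_to_topic("/1/fader1"))    # /touchosc/tabpage1/fader1
print(osc_address_to_topic("/2/toggle3"))   # /touchosc/tabpage2/toggle3
```

With a mapping like this, a fader moving on the tablet becomes a message on a predictable topic, and publishing to the same topic pushes a new value (or color, position, visibility) back to the control surface.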
I hope that you find this useful in your robotics projects, and I'm excited to see some of the future uses of the TouchOSC and Open Sound Control interfaces.
To get an idea of the basic features, I have made some YouTube videos:
Announcement from Adolfo Rodríguez Tsouroukdissian to ros-users
We are proud to announce the initial release of PAL Robotics' ROS packages. An overview of the available stacks and packages can be found here: http://www.ros.org/wiki/pal-ros-pkg
On the one hand, we are making available the virtual model of our humanoid service robot REEM, so you can test your algorithms on REEM in the context of a simulated environment. On the other hand, we are making available a motion retargeting stack that allows unilateral teleoperation of REEM from live motion capture data. The latter was developed by our intern Marcus Liebhardt as part of his M.Sc. thesis.
Further developments of these packages are already in preparation, so stay tuned for more!
We'd like to announce tu-darmstadt-ros-pkg, a repository
providing ROS compatible software developed at TU Darmstadt. From the
start, we provide packages developed in the scope of team HECTOR
Darmstadt related to SLAM
and object tracking in harsh environments such as those encountered in
the simulated Urban Search and Rescue (USAR) environments of the
RoboCup Rescue League. This is the SLAM system we used to score top
places at various competitions (1st place overall at the RoboCup German
Open 2011, a close 2nd for best-in-class autonomy at RoboCup 2011, 3rd
place at SICK Robot Day 2010, etc.). Example videos of hector_mapping
from the hector_slam stack used in a handheld mapping system can be seen here:
hector_mapping is a fast SLAM system that does not require any
odometry information and is able to learn accurate grid maps of small
and medium scale scenarios. It can be used interchangeably with
gmapping. The system provides 2D pose estimates at 40 Hz (with a Hokuyo
UTM-30LX) but does not perform explicit loop closure like gmapping does.
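To give a flavor of what a grid-based SLAM system without odometry does once a scan has been matched to a pose, here is a small self-contained sketch of marking laser endpoints in an occupancy grid. This is illustrative only; hector_mapping's real update also performs the scan matching itself and maintains multi-resolution maps.

```python
import math

# Sketch: mark laser endpoints as occupied in a flat, row-major
# occupancy grid, given a matched 2D scan pose. Illustrative only --
# not hector_mapping's actual update, which also does scan matching
# and keeps multiple map resolutions.

def mark_scan(grid, width, resolution, pose, ranges, angle_min, angle_inc):
    """Mark each valid laser endpoint as occupied (value 100, as in
    nav_msgs/OccupancyGrid) in a width x width grid."""
    px, py, ptheta = pose
    for i, r in enumerate(ranges):
        if not (0.1 < r < 30.0):          # discard invalid/out-of-range beams
            continue
        a = ptheta + angle_min + i * angle_inc
        x, y = px + r * math.cos(a), py + r * math.sin(a)
        cx, cy = int(x / resolution), int(y / resolution)
        if 0 <= cx < width and 0 <= cy < width:
            grid[cy * width + cx] = 100

grid = [0] * (100 * 100)                  # 100x100 cells at 5 cm = 5 m x 5 m
mark_scan(grid, 100, 0.05, (2.5, 2.5, 0.0), [1.0], 0.0, 0.0)
print(grid[50 * 100 + 70])                # the cell 1 m ahead of the pose: 100
```

A real system would also trace the free cells along each beam and fuse repeated observations probabilistically rather than overwriting values.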
hector_trajectory_server saves tf based trajectories given a source
and target frame. They are made available as a nav_msgs/Path using
both a service and topic. The travelled path of a robot can thus
easily be visualized in rviz as well as plotted into the Geotiff
generated by the hector_geotiff node.
hector_geotiff generates RoboCup Rescue League rules compliant
GeoTiff maps with georeference information, showing both the map and
the robot path. It uses nav_msgs/OccupancyGrid and nav_msgs/Path
messages retrieved via services, so it can also be used with gmapping
and other mapping systems.
hector_map_tools provides some tools for extracting
information from nav_msgs/OccupancyGrid data, like retrieving the
rectangular region of the map that actually contains non-unknown data.
object_tracker provides a probabilistic (Gaussian representation)
system for tracking and mapping the pose of objects of interest in the
world (used for victim mapping in RoboCup Rescue).
world_model_msgs provides a ROS message-based interface for updating the world model.
bfl_eigen is a patched version of BFL that uses Eigen.
hector_marker_drawing is a class for helping with publishing marker messages.
vrmagic_camera stack: driver for VRmagic four-sensor cameras (unstable development code).
Documentation and Tutorials will be added in the coming days once the repository is indexed on ros.org.
on behalf of all members of team HECTOR Darmstadt,
Stefan Kohlbrecher & Johannes Meyer
It is intended to be a building block that can be used to conveniently provide C++/Java/ROS APIs for handling zero-configuration services in a ROS framework.
Right now it has a fairly complete Linux avahi implementation and some testing code for Java/Android, but we'd like to also include bonjour and embedded implementations for completeness. Before going any further, though, there are various open design issues that would best be resolved by gathering interested parties to seek a consensus. Once the wiki pages are up with some info so people can do some browsing/testing, we'll invite people to an official API review to help refine the development process.
ros_pandora_generic contains packages that are platform-independent. Packages include utilities such as a remote watchdog timer and a remote mutex/counter. We have also implemented a wrapper around the Google Mock framework and created mock objects for Subscribers, Service Servers, and ActionLib Servers to be used for testing (such as range tests). There are also a number of other utility classes for testing. Finally, we include an EPOS Gateway implementation that we have used for our motors.
ros_pandora_platform_specific contains packages that are specific to our PANDORA robot, built for the RoboCup Rescue 2011 competition. More specifically, there are currently two packages using our testing utilities to test interfaces and perform range tests, and one package implementing our Qt GUI.
We will continue to update our repository with new packages throughout the next months.
Three more repositories were announced over the weekend:
LASA-ros-pkg: ROS node for position control of the Barrett WAM from EPFL-LASA.
roblab-whge-ros-pkg: The first contributed project is the ROSScan3D stack, which creates 3D point clouds with semantic information such as floors, walls, ceilings, room dimensions, and the text of the doorplates found in the scan area. The 3D scan is created by a mobile robot, in this case an iRobot Roomba, a SICK LMS 100 laser scanner, and a Canon standard high-resolution digital camera.
TYROS: The first published source there is TIChronos/src/ti_chronos_joy.cpp, which turns a Texas Instruments Chronos watch into a ROS-compatible joystick publisher. The included launch file works with the turtlesim demo.
Progress towards full Windows support for ROS continues. This should help provide critical support for making personal robotics more accessible to consumers running Windows. Congrats to Daniel and everyone else who has contributed to getting things like roscore running; your efforts are appreciated. Below is the official announcement.
I would like to announce the availability of a simple driver for the Neato Robotics XV-11 for ROS. The neato_robot stack contains a neato_driver (generic python based driver) and neato_node package. The neato_node subscribes to a standard cmd_vel (geometry_msgs/Twist) topic to control the base, and publishes laser scans from the robot, as well as odometry. The neato_slam package contains our current move_base launch and configuration files (still needs some work).
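Since the base is driven through a standard cmd_vel topic, any node that publishes a geometry_msgs/Twist can control it. The sketch below uses a plain dict as a stand-in for the ROS message type so it is self-contained; in a real rospy node you would publish an actual Twist on cmd_vel.

```python
# Sketch of the geometry_msgs/Twist command a teleop node would publish
# on cmd_vel to drive the Neato base. A plain dict stands in for the
# ROS message type so the example runs without a ROS installation.

def make_twist(linear_x=0.0, angular_z=0.0):
    """Build a Twist-like command; a differential-drive base like the
    Neato only uses linear.x and angular.z."""
    return {"linear":  {"x": linear_x, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": angular_z}}

# Drive forward at 0.2 m/s while turning at 0.5 rad/s. In a rospy node
# this would be roughly:
#   pub = rospy.Publisher('cmd_vel', Twist)
#   pub.publish(msg)
cmd = make_twist(0.2, 0.5)
print(cmd["linear"]["x"], cmd["angular"]["z"])   # 0.2 0.5
```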
I've uploaded two videos thus far showing the Neato:
We have developed in-house drivers for EPOS/ELMO controllers, which are
based on the CANopen/CiA DSP 402 protocol.
These drivers are written in C, and we are currently in the process of
re-factoring these libraries in C++, with the goal of producing a clean ROS
driver afterwards. Initially, the library will contain
implementations of subsets of both CANopen and CiA DSP 402, with quirks
for EPOS and ELMO. In the longer run, we will probably split the CANopen
part into a separate library and add more complete CiA DSP 402 support.
I guess that this might be of interest to the ROS user community. We
are committed to an open-development model, and so contributions are very welcome.
The repository consists of a stack suitable for the Roboard, and another stack specialized for small joint-based robots.
The hobby community seems to be reinventing the wheel with each person who combines an embedded PC with one of these humanoid robots. For beginners this is too daunting, and for others it is very time consuming. I hope to alleviate this, and get some help back too.
Here's a summary of some of the features:
Pose the robot based on definitions in an XML file
Execute motions by running a series of timed poses (XML)
Stabilization via gyro data
Definition of a KHR style robot linkage for 3D virtual modeling and servo control (URDF)
Calibrate trim of robot with GUI
Calibrate gyro stabilization with GUI
Import poses and trim (not motions) from Kondo's Heart2Heart RCB files
Control robot remotely over network with keyboard
Control robot with PS3 controller over bluetooth
Support for HMC6343 compass/tilt sensor
Support for Kondo gyro sensors
Stereo video capture and processing into point cloud
CPU heavy tasks (such as stereo processing) can be executed on remote computer
Controls Kondo PWM servos
Here are some missing parts (maybe others would like to contribute here?):
Control Kondo serial servos
GUI for editing and running poses/motions
Tool to capture poses
More sophisticated motion scripting
GUI for calibration of A/D inputs
My next goals for this project are to incorporate navigation, and arm/gripper trajectory planning.
I'd like to share a project I've been working on with the ROS community.
Some may be familiar with the Parrot AR.Drone: an inexpensive quadrotor helicopter that came out in September. My lab got one, but I was pretty disappointed that it didn't have ROS support out of the box. It does have potential, though, with 2 cameras and a full IMU, so it seemed like a worthwhile endeavor to create a ROS interface for it.
So, I would like to announce the first public release of the ROS interface for the AR.Drone. Currently, it allows control of the AR.Drone using a geometry_msgs/Twist message, and I'm working on getting the video feed, IMU data and other relevant state information published as well. Unfortunately, the documentation on how the Drone transmits its state information is a bit sparse, so getting at the video (anyone with experience converting H.263 to a sensor_msgs/Image, get in touch!) and IMU data is taking more time than I'd hoped, but it's coming along. Keep an eye on the ardrone stack; it will be updated as new features are added.
For now, if you're hoping to control your AR.Drone using ROS, this is the package for you! Either send a Twist from your own code, or use the included ardrone_teleop package for manual control.
You can find the ardrone_driver and ardrone_teleop packages on the experimental-ardrone branch of siue-ros-pkg, which itself never had a proper public release. This repository represents the Mobile Robotics Lab at SIUE, and contains a few utility nodes I have developed for some of our past projects, with more packages staged for addition to the repository once we have time to document them properly for a formal release.
Daniel Stonier from Yujin Robot has been bringing up an embedded
project for ROS, eros. Below is his announcement to ros-users
Let's bring down ROS! ...to the embedded level.
Firstly, my apologies - couldn't resist the pun.
This is targeted at anyone who is either working with a fully cross-compiled ROS or simply using it as a convenient build environment for embedded programming with toolchains.
Some of you might remember me sending out an email to the list about getting together to collaborate on ROS at the embedded level rather than all of us flying solo all the time. Since then, I'm happy to say, Willow has generously offered us space on their server to create a repository supporting embedded/cross-compiling development, which has now been kick-started with a relatively small but convenient framework that we've been using and testing at Yujin Robot for a while. The lads there have been excellent guinea pigs, particularly since most of them were very new to Linux and had little or no experience in cross-compiling.
Some build recipes for embedded versions of packages, e.g. opencv.
Some tutorials, e.g. instructions for doing a partial cross or a full cross of the ros.
...and various other things
If you want to take the tools for a test run, simply check out eros into the stacks directory of your ROS install, e.g.
svn co https://code.ros.org/svn/eros/trunk ./eros
But, what would be great at this juncture would be to have other embedded beards jump on board and get involved.
Tutorials on the wiki - platform howtos, system building notes...
General discussion on the eros forums.
Feedback on the current set of tools.
New toolchain/platform modules.
If you'd like to get involved, create an account on the wiki/project server and send me an email (email@example.com).
The goals page outlines where I've been thinking of taking eros, but of course this is not fixed and, as it's early, I am very open to new ideas. However, two big components I'd like to address in the future include:
Embedded package installer - a package+dependency chain (aka rosmake) installer. This is a bit different to Willow's planned stack installer, but will need to co-exist alongside it and should use as much of its functionality as possible.
Abstracted System Builder as an OS - hooking in something like OpenEmbedded as an abstracted OS that can work with rosdeps.
and of course, making the eros wiki a lot more replete with embedded knowledge.
Many of those tutorials and projects came together in the video above: Kitemas LV1. Kitemas LV1 is a fun drink-ordering robot that lets you order a drink and then pours it for you. Judging from previous posts, it looks like Kitemas is using a Roomba with a Hokuyo laser range finder for autonomous navigation, as well as a USB web camera. Drink selection can be done either through colored coasters or a Twitter API, and the robot can be driven manually with a PS3 joystick.
Here's a software diagram that shows the various ROS nodes working together:
OTL has also created otl-ros-pkg, so readers of his blog can get code samples for his various tutorials and even see code for robots like Kitemas above. You can watch a video with a more dressed up version of Kitemas LV1 here.
Brown is pleased to announce the beta version of rosjs, a lightweight JavaScript binding for ROS.
rosjs is designed to enable users and developers to use the
functionality of ROS through standard web browsers. Applications
developers can leverage all of the power of HTML to build engaging
applications and interfaces for robots as quickly as possible without
recompiling ROS nodes. Additionally, users can access and run
ROS-based applications from standard browsers without the need for any
locally installed software. rosjs is not
tied to any particular web server or framework; it even works when
served locally. Using websockets, latency is low enough for
teleoperation or closed loop control. For example, the following
video shows a user teleoperating the PR2 via rosjs from Providence to
rosjs is currently available for download from the brown-ros-pkg repository.
The Institute of Systems and Robotics at the University of Coimbra in Portugal has created the isr-uc-ros-pkg repository, which hosts a collection of BSD-licensed packages. In addition to support for the iRobot Roomba, they have contributed a wifi_discovery_node, which uses the experimental foreign_relay package to enable multi-robot communication over wireless. This node is available as a separate download, or you can check out the entire repository.
announcement from the CCNY Robotics lab on ros-users
Dear ROS community,
The Robotics Lab at the City College of New York is releasing a collection of ROS tools that we are developing for our research. The packages, which are Box Turtle-compatible, are grouped in the ccny-ros-pkg stack. Documentation is available on the ROS wiki.
We have made an effort to include a screenshot and a YouTube video demonstrating the usage of each package on the corresponding wiki page. The packages also come with pre-recorded bags and demo launch files, allowing developers to quickly test and get started with the tools.
artoolkit - a meta-package which downloads and installs ARToolkit locally.
ar_pose - a ROS wrapper for ARToolkit, capable of tracking the position of single or multiple AR markers relative to the camera, broadcasting the corresponding transforms, as well as publishing visualization markers to rviz.
laser_scan_splitter - a tool which takes a LaserScan message as
input and splits it into an arbitrary number of segments.
laser_ortho_projector - a tool which takes a LaserScan, as well as
the position of the laser in a fixed frame, and outputs the orthogonal projection of the scan, invariant to the roll, pitch, and z-position of the laser.
point_cloud_filter - a threshold filter for PointCloud messages,
which filters points based on values from any of the cloud's additional channels, such as "confidence" or "intensity".
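The core of a scan-splitting tool like laser_scan_splitter can be sketched in a few lines: divide the ranges array into contiguous segments and give each segment its own starting angle. This is an illustration of the idea, not the package's actual code.

```python
# Sketch of splitting one laser scan into n contiguous segments, the
# core operation of a tool like laser_scan_splitter (illustrative
# only, not the package's actual implementation).

def split_scan(ranges, angle_min, angle_increment, n):
    """Split a scan's ranges into n segments, each carrying its own
    angle_min so downstream nodes can treat it as a complete scan."""
    segments, size = [], len(ranges) // n
    for k in range(n):
        start = k * size
        end = start + size if k < n - 1 else len(ranges)  # last gets remainder
        segments.append({
            "angle_min": angle_min + start * angle_increment,
            "angle_increment": angle_increment,
            "ranges": ranges[start:end],
        })
    return segments

segs = split_scan(list(range(9)), -1.5, 0.5, 3)
print([s["ranges"] for s in segs])   # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(segs[1]["angle_min"])          # -1.5 + 3 * 0.5 = 0.0
```

In the real node each segment would be republished as its own sensor_msgs/LaserScan, typically in its own TF frame.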
Additional tools scheduled for release this summer include a driver for the AscTec Autopilot for use with AscTec quadrotor UAVs. We are looking forward to hearing back from you with comments and suggestions on how to improve our software.
Please check it out and help us make improvements, and be on the
look-out for more releases in the coming weeks, including: an ARTag
odometry system, an improved NAO v1.6 compatible version of our NAO
drivers, and more.
Wash U.'s B21r, known as Lewis, is best known for being a mobile robot photographer. Lewis is currently being used for HRI research, and they are also reimplementing the photographer functionality in ROS. Lewis is fully integrated with ROS, including sensor data from 48 sonar sensors, 56 bump sensors, 2 webcams, and a Hokuyo laser rangefinder. There is also a Directed Perception PTU-46 pan-tilt unit on which they have mounted the webcams (driver).
The B21r community will be happy to know that Wash U. has deeply integrated this platform with ROS. They have created an urdf model, complete with meshes for visualizing in rviz, and they have also integrated the B21r with the ROS navigation stack. They are also providing an rwi stack, which includes their rflex driver. The rflex driver is capable of driving other iRobot/RWI robot platforms, including the B18, ATRV, and Magellan Pro.
Wash U. has also integrated their four Videre ERRATICs with ROS. They've named these robots Blood, Sweat, Toil, and Tears, and have equipped them with Hokuyo laser rangefinders and webcams. The ERRATICs enable them to explore research in multi-robot coordination and control. They're also developing on iRobot Creates using drivers from brown-ros-pkg.
The research at the Media and Machines Lab has led to several interfaces and visualizations for using robots. This includes RIDE (Robot Interactive Display Environment), which takes cues from Real Time Strategy (RTS) video games to provide an interface for easily controlling multiple robots simultaneously. They have also developed a visualization for mapping sensor data over time for search tasks and a 3D interface for binocular robots. RIDE is available in the ride stack, and much of their other research will soon be released in wu-ros-pkg.
While the program will leverage the common hardware platform of the PR2, it is also a big benefit to the broader ROS community as a whole. All of the participants will be releasing work as open source, and much of this work will be immediately applicable to other robot platforms. For example:
KU Leuven will be working on improving integration between ROS and Orocos, as well as integrating ROS with other open-source libraries like Blender.
JSK will be working on integrating ROS, OpenRAVE, and EusLisp.
Bosch will be providing sensors like accelerometers, gyros, pressure sensors, and skins to participants, which will hopefully lead to new approaches and libraries for these types of sensors.
These are just a few examples, and you can read the announcement for more. There will be numerous libraries in perception, mapping, planning, manipulation, and more that we hope the ROS community will be able to build upon.
Many of the participating institutions have already started ROS repositories, including:
These repositories and more will be very active over the next two years, and we encourage the greater ROS community to take part by using the many open-source libraries for ROS in exciting new applications for robotics.
The ROS community has grown an amazing amount this year. As the Robots Using ROS series has illustrated, there are all types of robots using ROS, from mobile manipulators to autonomous cars to small humanoids. As the types of robots have increased, so too has the variety of software you can use with ROS, whether it be hardware drivers, libraries like exploration, or even code for research papers. This diversity has allowed all types of developers, including researchers, software engineers, and students, to participate in this growing community.
Today we officially crossed the 1000 ROS package milestone. This is due in no small part to the many new ROS repositories that have come online this year. We are now tracking 25 separate ROS repositories that are providing open source code, including repositories from:
The Intelligent Autonomous Systems Group at TU München (TUM) built TUM-Rosie with the goal of developing a robotic system with a high degree of cognition. This goal is driving research in 3D perception, cognitive control, knowledge processing, and high-level planning. TUM is building their research on TUM-Rosie using ROS and has set up the open-source tum-ros-pkg repository to share their research, libraries, and hardware drivers. TUM has already released a variety of ROS packages and is in the process of releasing more.
TUM-Rosie is a mobile manipulator built on a Kuka mecanum-wheeled omnidrive base, with two Kuka LWR-4 arms and DLR-HIT hands. It has a variety of sensors for accomplishing perception tasks, including a SwissRanger 4000, FLIR thermal camera, Videre stereo camera, SVS-VISTEK eco274 RGB cameras, a tilting "2.5D" Hokuyo UTM-30LX lidar, and both front and rear Hokuyo URG-04LX lidars.
One of the new libraries that TUM is developing is the cloud_algos package for 3D perception of point cloud data. cloud_algos is being designed as an extension of the pcl (Point Cloud Library) package. The cloud_algos package consists of a set of point-cloud-processing algorithms, such as a rotational object estimator. The rotational object estimator enables a robot to create models for objects like pitchers and boxes from incomplete point cloud data. TUM has already released several packages for semantic mapping and cognitive perception.
TUM is also working on systems that combine knowledge reasoning with perception. The K-COPMAN (Knowledge-enabled Cognitive Perception for Manipulation) system in the knowledge stack generates symbolic representations of perceived objects. This symbolic representation allows a robot to make inferences about what is seen, like what items are missing from a breakfast table.
In the field of knowledge processing and reasoning for personal robots, TUM developed the KnowRob system that can provide:
spatial knowledge about the world, e.g. the positions of obstacles
ontological knowledge about objects, their types, relations, and properties
common-sense knowledge, for instance, that objects inside a cupboard are not visible from outside unless the door is open
knowledge about the functions of objects like the main task a tool serves for or the sequence of actions required to operate a dishwasher
KnowRob is part of the tum-ros-pkg repository, and there is a wiki with documentation and tutorials.
At the high level, TUM is working on CRAM (Cognitive Robot Abstraction Machine), which provides a language for programming cognitive control systems. The goal of CRAM is to allow autonomous robots to infer decisions, rather than just having pre-programmed decisions. Practically, the approach will enable tackling of the complete pick-and-place housework cycle, which includes setting the table, cleaning the table as well as loading the dishwasher, unloading it and returning the items to their storage locations. CRAM features showcased in this scenario include the probabilistic inference of what items should be placed where on the table, what items are missing, where items can be found, which items can and need to be cleaned in the dishwasher, etc. As robots become more capable, it will be much more difficult to explicitly program all of their decisions in advance, and the TUM researchers hope that CRAM will help drive AI-based robotics.
Researchers at TUM have also made a variety of contributions to the core ROS system, including many features for the roslisp client library. They are also maintaining research datasets for the community, including a kitchen dataset and a semantic database of 3d objects, and they have contributed to a variety of other open-source robotics systems, like YARP and Player/Stage.
Research on the TUM-Rosie robot has been enabled by the Cluster of Excellence CoTeSys (Cognition for Technical Systems).
For more information:
The Modlab at Penn designed the CKBot (Connector Kinetic roBot) module to be fast, small, and inexpensive. These qualities enable it to be used to explore the promise of modular robotic systems, including adaptability, reconfigurability, and fault tolerance. They've researched dynamic rolling gaits, which use a loop configuration to achieve speeds of up to 1.6 m/s, as well as bouncing gaits enabled by attaching passive legs. They are also using the CKBots to research the difficult problem of configuration recognition, and, for the Terminator 2 fans, they have even demonstrated "Self re-Assembly after Explosion" (SAE).
More recently, Modlab has developed ROS packages that can be used when the CKBots are connected to a separate ROS system. They have also created an open source repository, modlab-ros-pkg, for CKBot ROS users. The CKBot modules only have a few PIC processors -- not enough to run ROS -- so an off-board system enables them to use algorithms that require more processing power. In one experiment, they used a camera to locate AR tags on the CKBot modules. The locations were stored in tf, which was used to calculate coordinate transforms between modules. They have also used rviz to display the estimated position of modules during SAE when AR tags were not in use.
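The coordinate-transform step described above can be sketched in 2D: given two module poses observed in the camera frame via AR tags, compute one module's pose relative to the other. tf performs the full 3D equivalent with quaternions; the function below is a simplified illustration, not Modlab's code.

```python
import math

# Sketch: given two module poses observed in the camera frame (via AR
# tags), express module B's pose in module A's frame. 2D version for
# illustration; tf does the full 3D equivalent of this computation.

def relative_pose(a, b):
    """Express pose b = (x, y, theta) in the frame of pose a."""
    ax, ay, at = a
    bx, by, bt = b
    dx, dy = bx - ax, by - ay
    # Rotate the displacement by -theta_a to bring it into A's frame.
    return (math.cos(-at) * dx - math.sin(-at) * dy,
            math.sin(-at) * dx + math.cos(-at) * dy,
            bt - at)

# Module B sits one meter "ahead" of module A, which faces +y (theta = pi/2),
# so in A's own frame B is 1 m straight ahead with no relative rotation.
x, y, theta = relative_pose((0.0, 0.0, math.pi / 2), (0.0, 1.0, math.pi / 2))
print(round(x, 6), round(y, 6), round(theta, 6))   # 1.0 0.0 0.0
```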
One of the projects Modlab is currently working on is a "mini-PR2" made out of CKBot modules. The mini-PR2 will be kinematically similar to the Willow Garage PR2 and is powered by a separate laptop. You can see an early prototype of mini-PR2 opening an Odwalla fridge:
Like most research robots, it's frequently reconfigured: they added an additional Mac mini, Flea camera, and Videre stereo camera for some recent work with visual localization.
Bosch RTC has been releasing drivers and libraries in the bosch-ros-pkg repository. They will be presenting their approach for mapping and texture reconstruction at ICRA 2010 and hope to release the code for that as well. This approach constructs a 3D environment using the laser data, fits a surface to the resulting model, and then maps camera data onto the surfaces.
Researchers at Bosch RTC were early contributors to ROS, which is remarkable as bosch-ros-pkg is the first open source project Bosch has ever contributed to. They have also been involved with the ros-pkg repository, improving the SLAM capabilities included with ROS Box Turtle, and they have been providing improvements to a visual odometry library that is currently in the works.
The Healthcare Robotics Lab focuses on robotic manipulation and human-robot interaction to research improvements in healthcare. Researchers at HRL have been using ROS on EL-E and Cody, two of their assistive robots. They have also been publishing their source code at gt-ros-pkg.
HRL first started using ROS on EL-E for their work on Physical, Perceptual, and Semantic (PPS) tags (paper). EL-E has a variety of sensors and a Katana arm mounted on a Videre ERRATIC mobile robot base. The video below shows off many of EL-E's capabilities, including a laser pointer interface: people use a laser pointer to select real-world objects for the robot to interact with.
Whether it's providing open source drivers for commonly used hardware, CAD models of their experimental hardware, or source code to accompany their papers, HRL has embraced openness with their research. For more information:
Like the Aldebaran Nao, the "Prairie Dog" platform from the Correll Lab at the University of Colorado is an example of the ROS community building on each other's results, and the best part is that you can build your own.
Prairie Dog is an integrated teaching and research platform built on top of an iRobot Create. It's used in the Multi-Robot Systems course at Colorado University, which teaches core topics like locomotion, kinematics, sensing, and localization, as well as multi-robot issues like coordination. The source code for Prairie Dog, including mapping and localization libraries, is available as part of the prairiedog-ros-pkg ROS repository.
Starting in the Fall of 2010, RoadNarrows Robotics will be offering a Prairie Dog kit, which will give you all the off-the-shelf components, plus the extra nuts and bolts. Pricing hasn't been announced yet, but the basic parts, including a netbook, will probably run about $3500.
The Care-O-bot 3 is a mobile manipulation robot designed by Fraunhofer IPA that is available both as a commercial robotic butler and as a platform for research. The Care-O-bot software has recently been integrated with ROS and, in just a short period of time, already supports everything from low-level device drivers to simulation inside of Gazebo.
The robot has two sides: a manipulation side and an interaction side. The manipulation side has a SCHUNK Lightweight Arm 3 with SDH gripper for grasping objects in the environment. The interaction side has a touchscreen tray that serves as both input and "output". People can use the touchscreen to select tasks, such as placing drink orders, and the tray can deliver objects to people, like their selected beverage.
The goals of the Care-O-bot research program are to:
provide a common open source repository for the hardware platform
provide simulation models of hardware components
provide remote access to the Care-O-bot 3 hardware platform
Those first two goals are supported by the care-o-bot open source repository for ROS, which features libraries for drivers, simulation, and basic applications. You can easily download the source code and perform a variety of tasks in simulation, such as driving the base and moving the arm. These support the third goal of providing remote access to physical Care-O-bot hardware via their web portal.
For sensing, the Care-O-bot uses two SICK S300 laser scanners, a Hokuyo URG-04LX laser scanner, two Pike F-145 firewire cameras for stereo, and Swissranger SR3000/SR4000s. The cob_driver stack provides ROS software integration for these sensors.
The Care-O-bot runs on a CAN interface with a SCHUNK LWA3 arm, SDH gripper, and a tray mounted on a PRL 100 for interacting with its environment. It also has SCHUNK PW 90 and PW 70 pan/tilt units, which give it the ability to bow through its foam outer shell. The CAN interface is supported through several Care-O-bot ROS packages, including cob_generic_can and cob_canopen_motor, as well as wrappers for libntcan and libpcan. The SCHUNK components are also supported by various packages in the cob_driver stack.
The video below shows the Care-O-bot in action. NOTE: as the Care-O-bot source code is still being integrated with ROS, the capabilities you see in the video are not part of the ROS repository.
The Aldebaran Nao is a commercially available, 60cm tall, humanoid robot targeted at research labs and classrooms. The Nao is small, but it packs a lot into its tiny frame: four microphones, two VGA cameras, touch sensors on the head, infrared sensors, and more. The use of Nao with ROS has demonstrated how quickly open-source code can enable a community to come together around a common hardware platform.
The first Nao driver for ROS was released by Brown University's RLAB in November of 2009. This initial release included head control, text-to-speech, basic navigation, and access to the forehead camera. Just a couple of days later, the University of Freiburg's Humanoid Robot Lab used Brown's Nao driver to develop new capabilities, including torso odometry and joystick-based tele-operation. Development didn't stop there: in December, the Humanoid Robot Lab put together a complete ROS stack for the Nao that added IMU state, a URDF robot model, visualization of the robot state in rviz, and more.
The Nao SDK already comes with built-in support for the open-source OpenCV library. It will be exciting to see what additional capabilities the Nao will gain now that it can be connected to the hundreds of different ROS packages that are freely available.
Brown is also using open source and ROS as part of their research process:
Publishing our ROS code as well as research papers is now an integral part of disseminating our work. ROS provides the best means forward for enabling robotics researchers to share their results and more rapidly advance the state-of-the-art.
-- Chad Jenkins, Professor, Brown University
The University of Freiburg's Nao stack is available on alufr-ros-pkg. Brown's Nao drivers are available on brown-ros-pkg, along with drivers for the iRobot Create and a Gstreamer-based webcam driver.
With so many open-source repositories offering ROS libraries, we'd like to highlight the many different robots that ROS is being used on. It's only fitting that we start where ROS started: STAIR 1, the STanford Artificial Intelligence Robot. Morgan Quigley created the Switchyard framework to support this mobile manipulation platform, and it was the lessons learned from building software to address the challenges of mobile manipulation robots that gave birth to ROS.
The mobile manipulation problem space is too large for any one group to solve alone. It requires multiple teams tackling separate challenges, like perception, navigation, vision, and grasping. STAIR 1 is a research robot built to address these challenges: a Neuronics Katana arm, a Segway base, and an ever-changing array of sensors, including a custom laser-line scanner, a Hokuyo laser range finder, an Axis PTZ camera, and more. The experience of developing for this platform in a research environment provided many lessons for ROS: small components, simple reconfiguration, lightweight coupling, easy debugging, and scalability.
STAIR 1 has tackled a variety of research challenges, from accepting verbal commands to locate staplers, to opening doors, to operating elevators. You can watch the video of STAIR 1 operating an elevator below, and you can find more videos and learn more about the STAIR program at stair.stanford.edu. You can also read Morgan's slides on ROS and STAIR from an IROS 2009 workshop.
In addition to the many contributions made to the core, open-source ROS system, you can also find STAIR-specific libraries at sail-ros-pkg.sourceforge.net/, including the code used for elevator operation.
We're pleased to announce that all of the packages within the brown-ros-pkg collection have been updated for ROS 1.0.x compatibility. The Brown automated install script has also been updated to reflect these changes. This update should fix any problems caused by using the previous 0.9.0 release with an updated roscore.
We encourage anyone interested to visit brown-ros-pkg. We currently have a driver for the iRobot Create, a basic driver for the Aldebaran Nao, and some basic vision packages.
Additions and improvements include:
Probe, a ROS webcam capture node new to this release. Probe leverages Gstreamer, making it compatible with almost every Linux camera and video system available. In addition, Gstreamer's software video processing can be used to emulate advanced features (e.g. white-balancing) even for cameras that don't have the appropriate v4l support.
A simple keyboard-based teleop interface inspired by teleop_base. Its only dependencies are the necessary geometry messages.
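To illustrate what a keyboard teleop node like this does at its core, here is a minimal sketch of a key-to-velocity mapping. The key bindings, speed values, and function name are illustrative assumptions, not the actual defaults of the Brown package:

```python
# Sketch of a keyboard teleop mapping in the spirit of teleop_base.
# Bindings and speeds below are illustrative assumptions, not real defaults.
KEY_BINDINGS = {
    'w': (0.5, 0.0),   # forward: (linear m/s, angular rad/s)
    's': (-0.5, 0.0),  # backward
    'a': (0.0, 1.0),   # turn left
    'd': (0.0, -1.0),  # turn right
    ' ': (0.0, 0.0),   # stop
}

def key_to_velocity(key):
    """Map a single keypress to a (linear, angular) velocity command.

    Unbound keys return a stop command, a common safety choice in teleop.
    """
    return KEY_BINDINGS.get(key, (0.0, 0.0))
```

In an actual node, the returned pair would be packed into a geometry_msgs Twist-style message and published on the robot's velocity command topic.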
As always, we're interested in the community's feedback and suggestions.
Fast on the heels of the Brown Nao driver, Armin Hornung of Albert-Ludwigs-Universität Freiburg has announced joy-package compatibility and torso odometry additions for the Nao driver -- as well as the alufr-ros-pkg repository. Armin's announcement is below.
Based on the recently announced Nao driver by Brown University, there is now a regular joystick teleoperation node available. It operates Nao using messages from the "joy" topic, so it should work with any gamepad or joystick in ROS. In addition, the control node running on Nao returns a basic torso odometry estimate. The code is available at http://code.google.com/p/alufr-ros-pkg/ to check out via SVN. A README with more details can be found in the "nao" stack there.
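The heart of a joystick teleoperation node like this is a mapping from joy axis values to a velocity command. Here is a minimal, ROS-independent sketch of that mapping; the axis indices, scale factors, and function name are assumptions for illustration, not the actual configuration of the alufr node:

```python
def joy_to_velocity(axes, linear_scale=0.3, angular_scale=0.5,
                    linear_axis=1, angular_axis=0):
    """Map joystick axes (each in [-1.0, 1.0]) to (linear, angular) velocities.

    Axis indices and scales are illustrative; a real node would read them
    from parameters and publish the result as a velocity command to the robot.
    """
    linear = linear_scale * axes[linear_axis]
    angular = angular_scale * axes[angular_axis]
    return linear, angular
```

Scaling the normalized stick deflection this way keeps the robot's maximum speed bounded no matter how far the stick is pushed, which is why most teleop nodes expose the scale factors as parameters.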
ROS is set up for distributed Open Source development. The system is designed to be built from a federation of repositories, each potentially run by a different organization. We've been happy to see three new ROS package repositories sprout up (besides those already hosted by Stanford, CMU, MIT, and TUM).
bosch-ros-pkg: Bosch has been extending our navigation stack to support frontier-based exploration, and they have been contributing patches back to the personalrobots repository. Their wrapper for libgphoto2 has also been getting some use on the ros-users mailing list.
wu-ros-pkg: Washington University is also setting up a ROS package repository (which the many Wash-U alumni at Willow Garage are happy to see). Bill Smart's presentation, "Is a Common Middleware for Robotics Possible?" at IROS 2007 helped guide the philosophy and goals of ROS in its earliest stages, especially the focus on making reusable libraries.
A lot of people ask, "How is ROS different from X?" where X is another robotics software platform. For us, ROS was never about creating a platform with the most features, though we knew that the PR2 robot would drive many requirements. Instead, ROS is about creating a platform that would support sharing and collaboration. This may seem like an odd goal for a software framework, especially as it means that we want our code to be useful with or without ROS, but we think that one of the catalysts for robotics will be broad libraries of robotics drivers and algorithms that are freely available and easily integrated into any framework.
We're excited to see so many public ROS package repositories have already emerged. Here's the list that we know of:
personalrobots.sourceforge.net: (see update) focused on Personal Robotics applications. Willow Garage is helping to maintain this repository with contributions from many others (Stanford, UPenn, TUM, CMU, MIT, etc...).
contains several drivers released by Radu Rusu of TUM, who has also been busy with his many contributions to the personalrobots repository.
code.google.com/p/lis-ros-pkg: contains an interface for the Barrett WAM and Hand, written by Kaijen Hsiao of MIT's Learning and Intelligent Systems Group
An ecosystem of federated package repositories is as important to ROS as the ROS node is for powering the distributed system of processes. Just as the ROS node is the unit of a ROS runtime, the ROS package is the unit of code sharing and the ROS package repository is the unit of collaboration. Each provides the opportunity for independent decisions about development and implementation, but all can be brought together with ROS infrastructure tools.
One of the tools we've written to support these federated repositories is roslocate. roslocate solves the problem of "where is package X?" For example, you can type "svn co `roslocate svn imagesift`" to quickly checkout the source code for the "imagesift" package from cmu-ros-pkg. As more ROS repositories emerge, we will continue to refine our tools so that multiple repositories can easily be an integral component of ROS development.
If you have any public repositories of your own that you'd like to share, drop us a note on ros-users and we'll add it to our list.