
Looking forward to ROSCon 2017, we're highlighting presentations from last year. ROSCon 2017 registration is currently open.



FlexGui 4.0 is built on popular web technologies: HTML5, CSS, and JavaScript. This makes it possible to run FlexGui 4.0 on PCs, Android, iPhone, Windows Phone, and generally on any device with a modern browser, with exactly the same user experience on each of them. FlexGui 4.0 communicates using ROS, our middleware of choice for Industry 4.0 and IoT. Join the session and let us introduce ourselves, see some of our industrial applications, and ask us how we could help you with yours!


View the slides here

Levi Armstrong (SwRI) ROS Qt Creator Project Manager Plug-in

Looking forward to ROSCon 2017, we're highlighting presentations from last year. ROSCon 2017 registration is currently open.

Here's Levi presenting his work to integrate ROS projects into the Qt Creator pipeline.



The ROS Qt Creator Plug-in is developed specifically for ROS to increase developers' efficiency by simplifying tasks and creating a centralized location for ROS tools. Since it is built on top of the Qt Creator platform, users have access to all of its existing features, such as syntax highlighting, editors (C++, Python, etc.), code completion, version control (Git, Subversion, etc.), debuggers (GDB, CDB, LLDB, etc.), and much more. The talk will cover a description and motivation; an overview of current and future features; and an example of how to use the plug-in to manage a ROS workspace.


View the slides here

Looking forward to ROSCon 2017, we're highlighting presentations from last year. ROSCon 2017 registration is currently open.

Mirko Bordignon (Fraunhofer IPA), Min Ling (ARTC - A*STAR), and Shaun Edwards (Southwest Research Institute) present the state of ROS-Industrial as it turns four years old and expands with a new Asia-Pacific chapter.



Four years after it was first launched by Shaun Edwards, ROS-Industrial is now a worldwide effort to expand and improve ROS adoption in manufacturing environments and industrial equipment. As an initiative, it is supported financially by OEMs and system integrators, whose interests are represented and requests are collected within three ROS-Industrial Consortia managed by non-profit, applied research institutions. As a software platform, it is developed by a free and open community gathering in monthly meetings and operating through a federated development model, much like the ROS developers community. The talk will showcase real-world examples and current efforts to expand the initiative and to continue addressing technical and non-technical concerns.


View the slides here

Looking forward to ROSCon 2017, we're highlighting presentations from last year. ROSCon 2017 registration is currently open.

In this session Alejandro presents robot_blockly as an approach to make programming robots more accessible.



robot_blockly is a ROS package that allows users to create ROS-based algorithms and behaviors, abstracting away their complexity with functional blocks. As a rule of thumb, an average PhD student takes three weeks to learn ROS, which puts ROS programming out of reach for the great majority. The robot_blockly package aims to simplify using ROS to the point of simply putting conceptual blocks together.
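To make the block idea concrete, here's a minimal sketch in plain Python (no ROS required; the block names and the composition scheme are hypothetical illustrations, not robot_blockly's actual API) of how blocks can wrap lower-level publishing code:

```python
# Sketch: how a block-based front end can hide ROS plumbing.
# Each "block" is a small callable; a program is just a list of blocks.
# All names here are hypothetical illustrations, not robot_blockly's API.

def make_publish_block(topic, message):
    """A 'publish' block: in real ROS this would wrap a Publisher.publish() call."""
    def block(log):
        log.append((topic, message))
    return block

def make_repeat_block(times, inner_blocks):
    """A 'repeat N times' control block wrapping other blocks."""
    def block(log):
        for _ in range(times):
            for inner in inner_blocks:
                inner(log)
    return block

def run_program(blocks):
    """Execute the blocks in order, collecting what they 'publish'."""
    log = []
    for b in blocks:
        b(log)
    return log

# A "program" assembled from two conceptual blocks:
program = [
    make_repeat_block(2, [make_publish_block("/cmd", "forward")]),
    make_publish_block("/cmd", "stop"),
]
```

The user only ever arranges `repeat` and `publish` blocks; the callable underneath is what actually talks to the robot.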


View the slides here

Niharika Arora (Fetch Robotics): Robot calibration

Looking forward to ROSCon 2017, we're highlighting presentations from last year. The ROSCon 2017 call for proposals is currently open, as is registration.

In this session Niharika Arora gives an overview of how Fetch Robotics calibrates their robots using robot_calibration.



Calibration is an essential prerequisite for nearly any robot. We have created a fast, accurate and robot-agnostic calibration system, which calibrates robot geometry in addition to the typical camera intrinsics and/or extrinsics. The system can be used with a variety of feature detectors to update the cost function and uses the Ceres optimizer for the convex optimization. The system then creates an updated URDF containing the calibrated parameters. This talk will cover the details of the robot-agnostic robot_calibration package and describe its use in the fetch_calibration package, which can calibrate dozens of parameters on a Fetch robot in as little as 3 minutes.
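The core of such a system is minimizing residuals between what the robot model predicts and what the feature detectors actually measure. A toy one-dimensional sketch of that idea (the real package hands a much larger nonlinear problem to the Ceres solver; the data here are made up):

```python
# Sketch of the residual-minimization idea behind calibration systems like
# robot_calibration: the real package optimizes many parameters with Ceres;
# here we solve a toy 1-D joint-offset version in closed form.

def calibrate_offset(predicted, measured):
    """Find the offset minimizing sum((measured - (predicted + offset))^2).
    For this least-squares problem the optimum is the mean residual."""
    residuals = [m - p for p, m in zip(predicted, measured)]
    return sum(residuals) / len(residuals)

predicted = [0.0, 0.5, 1.0, 1.5]      # model-predicted joint positions
measured = [0.02, 0.53, 1.01, 1.52]   # what the feature detector observed
offset = calibrate_offset(predicted, measured)  # ~0.02 rad systematic error
```

The estimated offset would then be written back into the robot description, which is conceptually what the updated URDF step does.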


View the slides here

Continuing our series highlighting ROSCon 2016 talks, we present Matthieu Amy talking about how to build fault-tolerant systems. He first covers the theory and then goes into specifics of how to make ROS systems robust.



Every system evolves during its operational lifetime. A system that remains dependable when facing changes (new threats, failures, updates) is called resilient. We propose an approach to safety and adaptive fault tolerance that takes advantage of Component-Based Software Engineering technologies to tackle a crucial aspect of resilient computing, namely the on-line adaptation of fault tolerance mechanisms. We will show how this approach can be implemented on ROS, explain some implementation details, and present the results of different experiments to validate the solution. We will also discuss how we can use checkpointing technologies to make the ROS master crash-tolerant.
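One of the simplest fault tolerance mechanisms in this space is heartbeat monitoring: a watchdog declares a node failed when its heartbeats stop arriving, triggering recovery. A minimal sketch with simulated timestamps (a generic illustration, not the authors' implementation):

```python
# Sketch of a heartbeat-based fault detector, one of the basic fault
# tolerance mechanisms discussed in the talk. Time is passed in explicitly
# so the example is deterministic; no ROS is required.

class Watchdog:
    def __init__(self, timeout):
        self.timeout = timeout    # seconds allowed between heartbeats
        self.last_beat = 0.0
        self.failures = 0         # how many times a failure was declared

    def beat(self, now):
        """The monitored node calls this periodically to prove liveness."""
        self.last_beat = now

    def check(self, now):
        """Returns True if the monitored node is considered alive."""
        if now - self.last_beat > self.timeout:
            self.failures += 1    # here a real system would start recovery
            return False
        return True

wd = Watchdog(timeout=1.0)
wd.beat(0.0)
alive_early = wd.check(0.5)   # within the deadline
alive_late = wd.check(2.0)    # heartbeat missed -> failure declared
```

Checkpointing, as mentioned for the ROS master, is the complementary half: the watchdog detects the crash, and a saved checkpoint lets a replacement resume from known state.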



ROSCon 2017

If you're interested in more information like this ROSCon 2017 is coming up! The call for proposals is currently open as well as registration.

At ROSCon 2016 Mukunda gave an overview of how Team Delft took on and won the Amazon Picking Challenge in 2016. The talk covers the approach the team used to win, how they leveraged MoveIt!, and offers insight into many lessons learned from the experience that can be used by others thinking about similar problems.



This presentation will focus on some of the key MoveIt! practices that we (motion planning team of Team Delft) followed for the Amazon Picking Challenge 2016. Particularly, the following points will be highlighted: 1) making appropriate MoveIt! API choices from a large set of options; 2) difficulties faced such as I/O synchronization with trajectories and collision checking with Octomaps and the corresponding solutions; 3) unsolved problems (mostly with robot driver) while planning around the joint limits of the robot; and 4) general recommendations for OMPL planner configurations with MoveIt!.
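On point 2, a common way to synchronize I/O with a planned trajectory is to pick the waypoint whose time-from-start matches the desired trigger time. A small sketch of that lookup (the trajectory data is hypothetical; this is the general technique, not the MoveIt! API itself):

```python
# Sketch of the trajectory/I-O synchronization problem: given a planned
# trajectory (waypoint times in seconds from start), find the waypoint at
# which to fire a gripper I/O command. Hypothetical data, not MoveIt! code.
import bisect

def io_waypoint_index(time_from_start, trigger_time):
    """Index of the first waypoint at or after trigger_time.
    Assumes time_from_start is sorted, as trajectory times always are."""
    return bisect.bisect_left(time_from_start, trigger_time)

times = [0.0, 0.4, 0.9, 1.3, 2.0]    # time_from_start of each waypoint
idx = io_waypoint_index(times, 1.0)  # fire the gripper at this waypoint
```

An execution monitor can then watch trajectory feedback and trigger the I/O when that waypoint index is reached, rather than relying on wall-clock timing.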



If you're interested in more information like this ROSCon 2017 is coming up! The call for proposals is currently open as well as registration.

Deanna Hood, William Woodall (OSRF): ROS 2 Update

Looking forward to ROSCon 2017, we're highlighting presentations from last year.

In this session Deanna and William give an update on the state of ROS2 development.



This talk will summarize the progress made since the last ROSCon update in 2015, including the alphas released during that time, changes to supported implementations, and the roadmap. The talk will also include demonstrations of new features and highlights of our experiences using ROS 2 in demos and benchmarking.


View the slides here

The ROSCon 2017 call for proposals is currently open as well as registration.

With ROSCon 2017 preparations getting started we wanted to feature some of the presentations from last year. The call for proposals is currently open as well as registration. To start with here's Steffi and Louise's talk about Gazebo, presented ambitiously as a live demo in Gazebo.


Gazebo is one of the most used simulators in the ROS community. It has been under heavy development for the past few years and its most recent version, Gazebo 7, comes with myriad new tools and features for new and experienced users alike. Recently, Gazebo development has emphasized user-centered design and improved usability. Updates include not only improved GUI tools and documentation for new folks, but also tools that streamline the workflow for experienced users. We explore new features including: Model Editor, Building Editor, apply force tool, logging and playback, model alignment and snap tools, camera angle controls, plotting, introspection and debugging aids, and more.


2017 University Rover Challenge robots using ROS

From Lucas Walter via ROS Discourse

I saw ROS tools or heard mentions of ROS in many of the URC CDR videos that were uploaded a few weeks ago; this is a playlist of them:

I could have easily missed more instances in the other videos:

It's interesting to see all the variations on the rocker bogie suspension system, and there are a handful of exceptions that use more novel approaches (though more of them need to show off the rovers going up or down a real incline and over rough terrain).


Michael Ferguson spent a year as a software engineer at Willow Garage, helping rewrite the ROS calibration system, among other projects. In 2013, he co-founded Unbounded Robotics, and is currently the CTO of Fetch Robotics. At Fetch, Michael is one of the primary people responsible for making sure that Fetch's robots reliably fetch things. Mike's ROSCon talk is about how to effectively use ROS as an integral part of your robotics business, including best practices, potential issues to avoid, and how you should handle open source and intellectual property.

Because of how ROS works, much of your software development (commercial or otherwise) is dependent on many external packages. These packages are constantly being changed for the better -- and sometimes for the worse -- at unpredictable intervals that are completely out of your control. Using continuous integration, consisting of systems that can handle automated builds, testing, and deployment, can help you catch new problems as early as possible. Michael also shares that a useful way to avoid new problems is to not immediately switch over to new software releases as soon as they're available: instead, stick with long-term support releases, such as Ubuntu 14.04 and ROS Indigo.

While the foundation of ROS is built on open source, using ROS doesn't mean that all of the software magic that you create for your robotics company has to be given away for free. ROS supports many different kinds of licenses, some of which your lawyers will be more happy with than others, but there are enough options with enough flexibility that it doesn't have to be an issue. Using Fetch Robotics as an example, Mike discusses what components of ROS his company uses in their commercial products, including ROS Navigation and MoveIt. With these established packages as a base, Fetch was able to quickly put together operational demos, and then iterate on an operating platform by developing custom plugins optimized for their specific use cases.

When considering how to use ROS as part of your company, it's important to look closely at the packages you decide to incorporate, to make sure that they have a friendly license, good documentation, recent updates, built-in tests, and a standardized interface. Keeping track of all of this will make your startup life easier in the long run. As long as you're careful, relying on ROS can make your company more agile, more productive, and ready to make a whole bunch of money off of the future of robotics.

Next up: Ryan Gariepy (Clearpath Robotics)


It's not sexy, but the next big thing for robots is starting to look like warehouse logistics. The potential market is huge, and a number of startups are developing mobile platforms to automate dull and tedious order fulfillment tasks. Transporting products is just one problem worth solving: picking those products off of shelves is another. Magazino is a German startup that's developing a robot called Toru that can grasp individual objects off of warehouse shelves, a particularly tricky task that Magazino is tackling with ROS.

Moritz Tenorth is Head of Software Development at Magazino. In his ROSCon talk, Moritz describes Magazino's Toru as "a mobile pick and place robot that works together with humans in a shared environment," which is exactly what you'd want in an e-commerce warehouse. The reason that picking is a hard problem, as Moritz explains, is perception coupled with dynamic environments and high uncertainty: if you want a robot that can pick a wide range of objects, it needs to be able to flexibly understand and react to its environment; something that robots are notoriously bad at. ROS is particularly well suited to this, since it's easy to intelligently integrate as much sensing as you need into your platform.

Magazino's experience building and deploying their robots has given them a unique perspective on warehouse commercialization with ROS. For example, databases and persistent storage are crucial (as opposed to a focus on runtime), and real-time control turns out to be less important than being able to quickly and easily develop planning algorithms and reducing system complexity. Software components in the ROS ecosystem can vary wildly in quality and upkeep, although ROS-Industrial is working hard to develop code quality metrics. Magazino is also working on remote support and analysis tools, and trying to determine how much communication is required in a multi-robot system, which native ROS isn't very good at.

Even with those (few) constructive criticisms in mind, Magazino says that ROS is a fantastic way to quickly iterate on both software and hardware in parallel, especially when combined with 3D printed prototypes for testing. Most importantly, Magazino feels comfortable with ROS: it has a familiar workflow, versatile build system, flexible development architecture, robust community that makes hiring a cinch, and it's still (somehow) easy to use.

Next up: Michael Ferguson (Fetch Robotics)

Tom Moore: Working with the Robot Localization Package


Clearpath Robotics is best known for building yellow and black robots that are the research platforms you'd build for yourself; that is, if it weren't much easier to just get them from Clearpath Robotics. All of their robots run ROS, and Clearpath has been heavily involved in the ROS community for years. Tom Moore, now with Locus Robotics, spent seven months as an autonomy developer at Clearpath. He is the author and maintainer of the robot_localization ROS package, and gave a presentation about it at ROSCon 2015.

robot_localization is a general purpose state estimation package that's used to give you (and your robot) an accurate sense of where it is and what it's doing, based on input from as many sensors as you want. The more sensors you're able to use for a state estimate, the better that estimate is going to be, especially if you're dealing with real-worldish things like unreliable GPS or hardware that flakes out on you from time to time. robot_localization has been specifically designed to handle cases like these, in an easy to use and highly customizable way. It offers state estimation in 3D space, gives you per-sensor message control, allows for an unlimited number of sensors (just in case you have 42 IMUs and nothing better to do), and more.
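At its heart, this kind of state estimation weights each input by how much you trust it. A scalar Kalman-style update sketch (robot_localization's actual EKF/UKF fuses a full multi-dimensional state; the numbers here are purely illustrative):

```python
# Sketch of variance-weighted sensor fusion, the core idea inside an EKF
# like robot_localization's, reduced to a single scalar state.

def fuse(est, est_var, meas, meas_var):
    """Kalman-style update: trust each source inversely to its variance."""
    k = est_var / (est_var + meas_var)  # Kalman gain in [0, 1]
    new_est = est + k * (meas - est)    # pull estimate toward measurement
    new_var = (1 - k) * est_var         # uncertainty always shrinks
    return new_est, new_var

# Prior from wheel odometry, then a GPS-like position measurement:
x, p = 10.0, 4.0              # estimate and variance from odometry
x, p = fuse(x, p, 12.0, 4.0)  # equally trusted sensor: estimate moves halfway
```

This is why adding more sensors helps: every fused measurement with finite variance reduces the estimate's uncertainty, and a flaky sensor (large variance) is simply weighted less rather than breaking the filter.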

Tom's ROSCon talk takes us through some typical use cases for robot_localization, describes where the package fits in with the ROS navigation stack, explains how to prepare your sensor data, and how to configure estimation nodes for localization. The talk ends with a live(ish) demo, followed by a quick tutorial on how to convert data from your GPS into your robot's world frame.

The robot_localization package is up to date and very well documented, and you can learn more about it on the ROS Wiki.

Next up: Moritz Tenorth, Ulrich Klank, & Nikolas Engelhard (Magazino GmbH)


Matt Vollrath and Wojciech Ziniewicz work at an e-commerce consultancy called End Point, where they provide support for Liquid Galaxy, a product that's almost as cool as it sounds. Originally an open source project begun by Google engineers on their twenty percent time, Liquid Galaxy is a data visualization system consisting of a collection of large vertical displays that wrap around you horizontally. The displays show an immersive (up to 270°) image that's ideal for data presentations, virtual tours, Google Earth, or anywhere you want a visually engaging environment. Think events, trade shows, offices, museums, galleries, and the like.

Last year, End Point decided to take all of the ad hoc services and protocols that they'd been using to support Liquid Galaxy and move everything over to ROS. The primary reason to do this was ROS support for input devices: you can use just about anything to control a Liquid Galaxy display system, from basic touchscreens to Space Navigator 3D mice to Leap Motions to depth cameras. The modularity of ROS is inherently friendly to all kinds of different hardware.

Check out this week's ROSCon 2015 video as Matt and Wojciech take a deep dive into their efforts in bringing ROS to bear for these unique environments.

Next up: Tom Moore (Clearpath Robotics)

ROS already comes with a fantastic built-in visualization tool called rviz, so why would you want to use anything else? At Southwest Research Institute, Jerry Towler explains how they've created a new visualization tool called Mapviz that's specifically designed for the kind of large-scale outdoor environments necessary for autonomous vehicle development. Specifically, Mapviz is able to integrate all of the sensor data you need on top of a variety of two-dimensional maps, such as road maps or satellite imagery.

As an autonomous vehicle visualization tool, Mapviz works just like you'd expect that it would, which Jerry demonstrated with several demos at ROSCon. Mapviz shows you a top-down view of where your vehicle is, and tracks it across a basemap that seamlessly pulls image tiles at multiple resolutions from a wide variety of local or networked map servers, including Open MapQuest and Bing Maps. Mapviz is, of course, very plugin-friendly. You can add things like stereo disparity feeds, GPS fixes, odometry, grids, pathing data, image overlays, projected laser scans, markers (including textured markers) from most sensor types, and more. It can't really handle three dimensional data (although it'll do two-and-a-half dimensions via color gradients), but for interactive tracking of your vehicle's navigation and path planning behavior, Mapviz should offer most of what you need.
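Fetching the right basemap tile for a given position and zoom level typically follows the standard Web Mercator ("slippy map") indexing scheme used by OpenStreetMap-style tile servers. A sketch of that conversion (the common convention, not necessarily Mapviz's exact code):

```python
# Sketch of standard Web Mercator ("slippy map") tile indexing, the scheme
# tile-based basemap clients use to decide which image tile to request
# for a given WGS84 coordinate and zoom level.
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Map a WGS84 coordinate to integer (x, y) tile indices at a zoom level."""
    n = 2 ** zoom  # the world is an n-by-n grid of tiles at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

tile = latlon_to_tile(0.0, 0.0, zoom=1)  # equator / prime meridian
```

Each zoom level doubles the grid in both axes, which is how a client can seamlessly pull tiles at multiple resolutions as you zoom the view.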

For a variety of non-technical reasons, SwRI hasn't been able to release all of its tools and plugins as open source quite yet, but they're working on getting approval as fast as they can. They're also in the process of developing even more enhancements for Mapviz, and you can keep up to date with the latest version of the software on GitHub.

Next up: Matt Vollrath & Wojciech Ziniewicz (End Point)

Michael Aeberhard (BMW): Automated Driving with ROS at BMW


BMW has been working on automated driving for the last decade, steadily implementing more advanced features ranging from emergency stop assistance and autonomous highway driving to fully automated valet parking and 360° collision avoidance. Several of these projects were presented at the 2015 Consumer Electronics Show, and as it turns out, the cars were running ROS for both environment detection and planning.

BMW, being BMW, has no problem getting new research hardware. Their latest development platform is the 335I G. This model comes with an advanced driver assistance system based around cameras and radar. The car has been outfitted with four low-profile laser scanners and one long-range radar, but otherwise, it's pretty close (in terms of hardware) to what's available in production BMWs.

Why did BMW choose to move from their internally developed software architecture to ROS? Michael explains how ROS' reputation in the robotics research community prompted his team to give it a try, and they were impressed with its open source nature, distributed architecture, existing selection of software packages, as well as its helpful community. "A large user base means stability and reliability," Michael says, "because somebody else probably already solved the problem you're having." Additionally, using ROS rather than a commercial software platform makes it much easier for BMW to cooperate with universities and research institutions.

Michael discusses the ROS software architecture that BMW is using to do its autonomous car development, and shows how the software interprets the sensor data to identify obstacles and lane markings and do localization and trajectory planning to enable full highway autonomy, based on a combination of lane keeping and dynamic cruise control. BMW also created their own suite of RQT and rviz plugins specifically designed for autonomous vehicle development.

After about two years of experience with ROS, BMW likes a lot of things about it, but Michael and his team do have some constructive criticisms: message transport needs more work (although ROS 2 should help with this), managing configurations for different robots is problematic, and it's difficult to enforce compliance with industry standards like ISO.

Next up: Jerry Towler & Marc Alban (SwRI)


While Intel is best known for making computer processors, the company is also interested in how people interact with all of the computing devices that have Intel inside. In other words, Intel makes brains, but they need senses to enable those brains to understand the world around them. Intel has developed two very small and very cheap 3D cameras (one long range and one short range) called RealSense, with the initial intent of putting them into devices like laptops and tablets for applications such as facial recognition and gesture tracking.

Robots are also in dire need of capable and affordable 3D sensors for navigation and object recognition, and fortunately, Intel understands this, and they've created the RealSense Robotics Innovation Program to help drive innovation using their hardware. Intel itself isn't a robotics company, but as Amit explains in his ROSCon talk, they want to be a part of the robotics future, which is why they prioritized ROS integration for their RealSense cameras.

A RealSense ROS package has been available since 2015, and Intel has been listening to feedback from roboticists and steadily adding more features. The package provides access to the RealSense camera data (RGB, depth, IR, and point cloud), and will eventually include basic computer vision functions (including plane analysis and blob detection) as well as more advanced functions like skeleton tracking, object recognition, and localization and mapping tools.
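Turning a depth image into the point cloud the package publishes comes down to pinhole back-projection through the camera intrinsics. A sketch with made-up intrinsics (the standard camera model, not the RealSense driver's code):

```python
# Sketch of pinhole back-projection: how a depth pixel becomes a 3-D point
# in the camera frame. The intrinsics (fx, fy, cx, cy) are illustrative
# made-up values, not those of any real RealSense camera.

def deproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with depth z in meters -> 3-D camera-frame point."""
    x = (u - cx) * depth / fx   # horizontal offset scales with depth
    y = (v - cy) * depth / fy   # vertical offset scales with depth
    return (x, y, depth)

# The principal point maps straight down the optical axis:
pt = deproject(u=320, v=240, depth=2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Applying this to every valid depth pixel yields the full point cloud; the driver additionally registers RGB values onto those points for colored clouds.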

Intel RealSense 3D camera developer kits are available now, and you can order one for as little as $99.

Next up: Michael Aeberhard, Thomas Kühbeck, Bernhard Seidl, et al. (BMW Group Research and Technology) Check out last week's post: The Descartes Planning Library for Semi-Constrained Cartesian Trajectories


Descartes is a path planning library that's designed to solve the problem of planning with semi-constrained trajectories. Semi-constrained means that the degrees of freedom of the path you need to plan are fewer than the degrees of freedom that your robot has. In other words, when planning a path, there are one or more "free" axes that your robot has to work with that can be moved any which way without disrupting the path. This can open up the planning space if you can utilize them creatively, which traditional robots (especially in the industrial space) usually can't. This results in reduced workspaces and (most dangerous of all) increased reliance on human intuition during the planning process.

Descartes was designed to generate common sense plans, exhibiting similar characteristics to paths planned by a human. It can solve easy problems quickly, and difficult problems eventually, integrating hybrid trajectories and dynamic replanning. It's easy to use, with a GUI that allows you to quickly set anchor points that the robot replans around, with visual confirmation of the new path. The second half of Shaun's ROSCon talk is an in-depth explanation of Descartes' interfaces and implementations intended for path planning fans (you know who you are).
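The idea of exploiting a free axis can be sketched as sampling candidate orientations at a waypoint and keeping the cheapest one. Here the "cost" is just angular distance from the current wrist angle, a hypothetical stand-in for a real IK and joint-travel evaluation (this illustrates the concept, not Descartes' API):

```python
# Sketch of the core Descartes idea: a tool whose rotation about its own
# z-axis is unconstrained gives the planner many candidate orientations
# per waypoint. Sample the free axis and keep the cheapest candidate.
# The cost model below is a made-up stand-in for a real IK check.
import math

def best_free_axis_angle(current_angle, samples=8):
    """Discretize the free axis and pick the candidate requiring the
    least wrist travel from the current angle (wrapping at 2*pi)."""
    candidates = [2 * math.pi * i / samples for i in range(samples)]

    def travel(a):
        d = abs(a - current_angle) % (2 * math.pi)
        return min(d, 2 * math.pi - d)   # shortest way around the circle

    return min(candidates, key=travel)

angle = best_free_axis_angle(current_angle=0.7)  # nearest of 8 samples
```

A fixed-orientation planner would be forced to reach exactly one pose; sampling the free axis is what opens up the workspace the talk describes.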

As with many (if not most) of the projects being presented at ROSCon, Descartes is open source, and all of the development is public. If you'd like to try it out, the current stable release runs on ROS Hydro, and a tutorial is available on the ROS Wiki to help you get started.

Next up: Amit Moran & Gila Kamhi (Intel) Check out last week's post: Phobos -- Robot Model Development on Steroids


To model a robot in rviz, you first need to create what's called a Unified Robot Description Format (URDF) file, which is an XML-formatted text file that represents the physical configuration of your robot. Fundamentally, it's not that hard to create a URDF file, but for complex robots, these files tend to be enormously complicated and very tedious to put together. At the University of Bremen, Kai von Szadkowski was tasked with developing a URDF model for a 60-degree-of-freedom robot called MANTIS (Multi-legged Manipulation and Locomotion System). Kai got a bit fed up with the process and developed a better way of doing it, called Phobos.




Phobos is an add-on for a piece of free and open-source 3D modeling and rendering software called Blender. Using Blender, you can create armatures, which are essentially kinematic skeletons that you can use to animate a 3D character. As it turns out, there are some convenient parallels between URDF models and 3D models in Blender: the links and joints in a URDF file equate to armatures and bones in Blender, and both use similar hierarchical structures to describe their models. Phobos adds a new toolbar to Blender that makes it easy to edit these models by adding links, motors, sensors, and collision geometries. You can also leverage Blender's Python scripting environment to automate as much of the process as you'd like. Additionally, Phobos comes with a sort of "robot dictionary" in Python that manages all of the exporting to URDF for you.
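Under the hood, what Phobos ultimately exports is just XML: a URDF is a tree of link and joint elements in a parent/child hierarchy. A minimal standard-library sketch of that structure (a real URDF also carries inertials, visuals, and collision geometry; the robot here is invented):

```python
# Sketch of the XML structure a URDF exporter has to emit: links plus
# joints that name their parent and child links. Built with the standard
# library; a real URDF carries far more data (inertials, visuals, etc.).
import xml.etree.ElementTree as ET

robot = ET.Element("robot", name="mini_bot")
ET.SubElement(robot, "link", name="base_link")
ET.SubElement(robot, "link", name="arm_link")

joint = ET.SubElement(robot, "joint", name="shoulder", type="revolute")
ET.SubElement(joint, "parent", link="base_link")
ET.SubElement(joint, "child", link="arm_link")

urdf = ET.tostring(robot, encoding="unicode")  # serialized URDF snippet
```

The links-and-joints hierarchy is exactly what maps so cleanly onto Blender's armatures and bones, which is why the Phobos approach works.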


Since the native URDF format can't handle all of the information that can be incorporated into your model in Blender, Kai proposes an extended version of URDF called SMURF (Supplemental Mostly Universal Robot Format) that adds YAML files to a URDF, supporting annotations for sensors, motors, and anything else you'd like to include.


If any of this sounds good to you, it's easy to try it out: Blender is available for free, and Phobos can be found on GitHub.


Dave Coleman has worked in (almost) every robotics lab there is: Willow Garage, JSK Humanoids Lab in Tokyo, Google, UC Boulder, and (of course) OSRF. He's also the owner of PickNik, a ROS consultancy that specializes in training robots to destructively put packages of Oreo cookies on shelves. Dave has been working on MoveIt! since before it was first released, and to kick off the second day of ROSCon, he gave a keynote to share everything he knows about motion planning in ROS.

MoveIt! is a flexible and robot agnostic motion planning framework that integrates manipulation, 3D perception, kinematics, control, and navigation. It's a collaboration between lots of people across many different organizations, and is the third most popular ROS package with a fast-growing community of contributors. It's simple to set up and use, and for beginners, a plugin lets you easily move your robot around in Rviz.

As a MoveIt! pro, Dave offers a series of pro tips on how to get the most out of your motion planner. For example, he suggests that researchers try using C++ classes individually to avoid getting buried in a bunch of layered services and actions. This makes it easier to figure out why your code doesn't work. Dave also describes his experience in the Amazon Picking Challenge, held last year at ICRA in Seattle.

MoveIt! is great, but there's still a lot of potential for improvement. Dave discusses some of the things that he'd like to see, including better reliability (and more communicative failures), grasping support, and, as always, more documentation and better tutorials. A recent MoveIt! community meeting resulted in a future roadmap that focuses on better humanoid kinematic support and support for other types of planners, as well as integrated visual servoing and easy access to calibration packages.

Dave ends with a reminder that progress is important, even if it's often at odds with stability. Breaking changes are sometimes necessary in order to add valuable features to the code. As with much of ROS, MoveIt! depends on the ROS community to keep it capable and relevant. If you're an expert in one of the components that makes MoveIt! so useful, you should definitely consider contributing back with a plug-in that others can take advantage of.

Next up: Mirko Bordignon (Fraunhofer IPA), Shaun Edwards (SwRI), Clay Flannigan (SwRI), et al. Check out last week's post: Real-time Performance in ROS 2

Jackie Kay (OSRF): Real-time Performance in ROS 2


Jackie Kay was upgraded from OSRF intern to full-time software engineer in 2014. Her background includes robotics education and path planning for autonomous lunar rovers. More recently, she's been working on bringing real-time computing to ROS 2.

Real-time computing isn't about computing at a certain speed -- it's about computing on schedule. It means that your system can return data reliably and on time, in situations where responding late is usually a bad thing, and sometimes a really bad thing. Hard real-time computing is important in safety critical applications (like nuclear reactors, spacecraft, and autonomous vehicles), when taking too long thinking about something could result in a figurative or literal crash -- or both. Soft real-time computing is a bit more forgiving, in that things running behind have a cost, but the data are still usable, as with packets arriving out of order while streaming video. And in between there's firm real-time computing, where missing deadlines is definitely bad but nothing explodes (or things only explode a little bit), like on a robotic assembly line.
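The distinction can be made concrete by counting deadline misses over per-cycle execution times: hard real-time demands zero, soft real-time tolerates some at a cost. A sketch with simulated numbers (no actual scheduler involved):

```python
# Sketch of what "real time" actually measures: not raw speed, but whether
# each cycle finishes by its deadline. Execution times are simulated.

def count_deadline_misses(exec_times, deadline):
    """Number of cycles that overran the deadline. A hard real-time system
    requires this to be zero; soft real-time tolerates misses at a cost."""
    return sum(1 for t in exec_times if t > deadline)

cycle_times = [0.8, 0.9, 1.4, 0.7, 1.1]  # ms taken by each control cycle
misses = count_deadline_misses(cycle_times, deadline=1.0)
```

Note that the average here (0.98 ms) is under the deadline even though two individual cycles miss it, which is exactly why averages are the wrong metric for real-time claims.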

Making a system that's adaptable and reliable, especially in the context of commercialization, often requires real-time computing, and this is why integrating real-time compatibility is one of the primary goals of ROS 2. Jackie's keynote addresses many of the technical details underlying the ROS 2 real-time approach, including scheduling, memory management, node design, and communications strategies. To illustrate the improvements that ROS 2 has over ROS, Jackie shares benchmarking results of a ROS 2 demo running in real-time, showing that even under stress, implementing a high performance soft real-time system in ROS 2 looks promising.

To try real-time computing in ROS 2 for yourself, you can download an Alpha release and play around with a demo here:

ROSCon 2015 Hamburg: Day 1 - Jackie Kay: Real-time Performance in ROS 2 from OSRF on Vimeo.

Next up: Dave Coleman (University of Colorado Boulder) Check out last week's post: State of ROS 2


ROS has been an enormously important resource for the robotics community. It turned eight years old at the end of 2015, and is currently on its ninth official release. As ROS adoption has skyrocketed (especially over the past several years), OSRF, together with the community, have identified many specific areas of the operating system that need major overhauls in order to keep pace with maturing user demand. Dirk Thomas, Esteve Fernandez, and William Woodall from OSRF gave a preview at ROSCon 2015 of what to expect in ROS 2, including multi-robot systems, commercial deployments, microprocessor compatibility, real time control, and additional platform support.

The OSRF team shows off many of the exciting new ROS 2 features in this demo-heavy talk, including distributed message passing through DDS (no ROS master required), performance boosts for communications within nodes, quality of service improvements, and ways of bridging ROS 1 and ROS 2 so that you don't have to make the leap all at once. If you'd like to make the leap all at once anyway, the Alpha 1 release of ROS 2 has been available since last September, and Thomas ends the talk with a brief overview of the roadmap leading up to ROS 2's Alpha 2 release. As of April 2016, ROS 2 is on release Alpha 5 ("Epoxy"), and you can keep up-to-date on the roadmap and release schedule here.
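One of the quality-of-service knobs DDS brings is history depth: with keep-last semantics, a slow subscriber only ever sees the newest N messages instead of an unbounded backlog. A plain-Python sketch of that behavior (illustrating the QoS concept, not the rclpy API):

```python
# Sketch of a "keep last N" history queue like the ROS 2 QoS depth setting:
# when the publisher outruns the subscriber, the oldest messages are
# silently dropped. Plain Python, not the rclpy API.
from collections import deque

class KeepLastTopic:
    def __init__(self, depth):
        self.queue = deque(maxlen=depth)  # deque drops the oldest on overflow

    def publish(self, msg):
        self.queue.append(msg)

    def take_all(self):
        """Drain everything currently buffered for the subscriber."""
        msgs = list(self.queue)
        self.queue.clear()
        return msgs

topic = KeepLastTopic(depth=2)
for i in range(5):           # a fast publisher sends five messages
    topic.publish(i)
received = topic.take_all()  # the slow subscriber sees only the last two
```

Choosing depth (and policies like reliable vs. best-effort delivery) per topic is what lets ROS 2 trade freshness against completeness, something ROS 1's transport didn't expose.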

ROSCon 2015 Hamburg: Day 1 - Dirk Thomas: State of ROS 2 - demos and the technology behind from OSRF on Vimeo.

Next up: Jackie Kay (OSRF) & Adolfo Rodríguez Tsouroukdissian (PAL Robotics) Check out last week's post: Lightning Talk highlights

ROSCon: Lightning Talk Highlights



The growing popularity of ROSCon means that it's not always possible to schedule presentations for everyone who wants to give one. In addition, many people have projects that they'd like to share, but don't need a full twenty minutes to present. That's why forty minutes of each day at ROSCon are set aside for any attendee to present anything they want; all in a heartlessly rigid three-minutes-or-less format. Here are a few highlights:

Talk 1 (00:05 -- 02:15): Víctor Mayoral Vilches, Erle Robotics

Víctor is the CTO and co-founder of Erle Robotics. The Erle-Brain 2 is an open source, open hardware controller for robots based on the Raspberry Pi 2. It runs ROS, will support ROS 2, and can be used as the brain for all kinds of different robots, including the Erle Spider, a slightly misnamed hexapod that you can buy for €599.

Talk 3 (06:55 -- 10:00): Andreas Bihlmaier, KIT

Andreas works on robot-assisted surgery using ROS at Karlsruhe Institute of Technology. KIT has a futuristic operating room full of robots and sensors designed to help human doctors and nurses through positional tracking, augmented reality, and direct robotic assistance. Andreas is also interested in collaborating with people on ROS Medical, which doesn't exist yet but has a really cool logo anyway.

Talk 10 (29:20 -- 31:30): Jochen Sprickerhof, Universität Osnabrück

Through the efforts of Jochen Sprickerhof and Leopold Avellaneda, there are now ROS packages available upstream in Debian unstable and Ubuntu Xenial that can be installed from the main Debian and Ubuntu repositories. The original ROS packages have been modified to follow Debian guidelines, which includes splitting packages into multiple pieces, changing names in some cases, installing to /usr according to FHS guidelines, and using soversions on shared libraries.

ROSCon 2015 Hamburg: Day 1 - Lightning Talks from OSRF on Vimeo.

Next up: Dirk Thomas, William Woodall (OSRF) & Esteve Fernandez Check out last week's post: Ralph Seulin of CNRS


The first step in doing something new, useful, and exciting with ROS is -- without exception -- learning how to use ROS. Ralph Seulin is part of CNRS in France, which, along with universities in Spain and Scotland, collaboratively offer a masters course in robotics and computer vision that includes a focus on ROS. Over four semesters, between 30 and 40 students go through the program. In this talk, Seulin discusses how ROS is taught to these students, as well as what kinds of research they leverage that knowledge into.

Before Seulin's group could effectively teach ROS to students, they had to learn ROS for themselves. This was a little bit more difficult way back in 2013 than it is now, but they took advantage of the ROS Wiki, read all the books on ROS they could get ahold of, and of course made sure to attend ROSCon. From there, Seulin developed a series of tutorials for his students, starting with simulations and ending up with practical programming in ROS on the TurtleBot 2. Ultimately, students spend 250 hours on a custom robotics project that integrates motion control, navigation and localization, and computer vision tasks.

Seulin also makes use of ROS in application development. One of those applications is in precision vineyard agriculture because, as Seulin explains, "we come from Burgundy." Using lasers mounted on a tractor to collect and classify 3D data, a prototype robot tractor can be used to analyze vineyard canopies and estimate leaf density. With this information, vineyards can dynamically adjust the application of agricultural chemicals, using just the right amount and only where necessary. Better for plants, better for humans, thanks to ROS.

ROSCon 2015 Hamburg: Day 1 - Ralph Seulin: ROS for education and applied research: practical experiences from OSRF on Vimeo.

Next up: Dirk Thomas, William Woodall (OSRF) & Esteve Fernandez Check out last week's post: Daniel Di Marco of Bosch


Daniel Di Marco is part of Deepfield Robotics, a 20-person agricultural robotics startup within Bosch. Deepfield makes robots that can, among other things, visually locate and then brutally destroy weeds by pounding them into the dirt. In order to deliver software to their customers, Deepfield decided to create its own build farm, and Di Marco's ROSCon presentation explains why managing a build farm internally is a good idea for a startup.

A build farm is a system that can automatically create Debian packages for you, while running integrated unit tests and generating documentation. OSRF already supports all of ROS with its own build farm, so why would anyone want to set up a build farm for themselves instead? Simple, says Di Marco: it's something you should do if you actually want to make money with your robots.

If ROS is a part of your thriving robotics business, running a build farm allows you to do several important things. First, since you're hosting your code on your own servers, you can maintain control over it, protecting your intellectual property and any proprietary components that you may be using. Second, you can use your build farm to distribute your packages directly to your customers, who are (presumably) paying you, and not to just anybody who swings by and wants to snag them. And lastly, you can decide what versions of different packages you want to keep using, rather than being subjected to upgrades that may not work as well.

Di Marco concludes by discussing why Docker is an easy and reliable foundation for a build farm, and how to get it set up. Most of the process has been scripted, thanks to some hard work at OSRF, and Di Marco walks us through an initial deployment to help you get your own build farm up and running.

ROSCon 2015 Hamburg: Day 1 - Daniel Di Marco: Docker-based ROS Build Farm from OSRF on Vimeo.

Next up: Ralph Seulin, Raphael Duverne, and Olivier Morel (CNRS - Univ. Bourgogne Franche-Comte) Check out last week's post: Ruffin White of Georgia Tech


Morgan Quigley is first author of the authoritative 2009 workshop paper on the Robot Operating System. He's been Chief Architect at OSRF since 2012, and in 2013, MIT Tech Review awarded Quigley a prestigious TR35 award. In addition to software development, Quigley knows a thing or two about hardware: he helped Sandia National Labs design high-efficiency bipeds for DARPA, and he also gave Sandia a hand with the development of their sensor-rich, high-DOF robotic hand.

Quigley's ROSCon talk is focused on small (but not tiny) microcontrollers: 32-bit MCUs running at a few hundred megahertz or so, with USB and Ethernet connections. While these types of processors can't power smartphones or run Linux, they are found in many popular embedded systems, such as the Pixhawk PX4 autopilot. Microcontrollers like these would be much easier to integrate if they all operated under a standardized communication protocol, but there are enough inconvenient hoops that have to be jumped through to run ROS on them that it's usually not worth the hassle.

ROS 2, which doesn't rely on a master node and has native UDP message passing, promises to work much better than ROS on distributed embedded systems. To make ROS 2 fit on a small microcontroller, Quigley demonstrates a few applications of FreeRTPS, a portable, embedded-friendly implementation of the network protocol underlying ROS 2.

After showing the impressive results of some torture tests on a Discovery board, Quigley talks about what's coming next, including a focus on even smaller microcontrollers (like Arduino boards that communicate over USB rather than Ethernet). Eventually, Quigley suggests that ROS 2 will be small and light enough to run on the microcontrollers inside sensors and actuators themselves, simplifying real-time control.

ROSCon 2015 Hamburg: Day 1 - Morgan Quigley: ROS 2 on "small" embedded systems from OSRF on Vimeo.

Next up: Ruffin White of Institute for Robotics & Intelligent Machines at Georgia Tech Check out last week's post: Roman Bapst of ETH Zurich and PX4

ROS Live Worldwide Virtual Meetup Report

From David Lu! via ros-users@

Thanks to everyone who participated in last week's ROS Live meeting on
Robot Description Formats.

You can listen to the audio and see notes here:
It has also been released as a podcast for those of you who are so inclined.

To continue the discussion about the future of Robot Description
Formats, OSRF has set up a Discourse page.

You can vote on topics for the next meeting on this page:


PX4 is a flight control software stack for autonomous aerial robots that describes itself as "rocket science straight from the best labs, powering anything from racing to cargo drones." One of these labs is at ETH Zurich, where Roman Bapst serves on the faculty. Bapst works on computer vision and actively contributes to the PX4 autopilot platform.

Bapst starts out by describing some of the great things about the PX4 autopilot: it's open source, open hardware, and supported by the Linux Foundation's Dronecode Project, which provides a framework under which developers can contribute to an open source standard platform for drones. PX4 runs on 3DRobotics' Pixhawk hardware, and once you hook up some sensors and servos, it will autonomously pilot nearly anything that flies - from conventional winged aircraft to multicopters to hybrids.

One of PX4's unique features is its modularity, which is fundamentally very similar in structure to ROS. This means that you can run PX4 modules as ROS nodes, while taking advantage of other ROS packages under PX4 to do things like vision based navigation and control. Additionally, it lets you easily simulate PX4-based drones within Gazebo, which, unlike real life, has a free reset button that you can push after a crash.

The PX4 team is currently getting their software modules running as ROS nodes on Qualcomm's Snapdragon Flight drone development platform, which would be a very capable (and affordable) way of getting started with a custom autonomous drone.

ROSCon 2015 Hamburg: Day 1 - Roman Bapst: ROS on DroneCode Systems from OSRF on Vimeo.

Next up: Morgan Quigley of OSRF Check out last week's post: Gary Servín of Ekumen

Gary Servín (Ekumen): ROS android_ndk: What? Why? How?


What's ROS android_ndk? Why should you care? How does it work? Gary Servín is with Ekumen, an engineering and software consulting company based in Buenos Aires, Argentina that specializes in ROS, web, and Android development. With the backing of Qualcomm and OSRF, Ekumen has been trying to make it both possible and easy to run ROS applications on small Android devices like tablets and cellphones using Android's Native Development Kit (NDK).

As Gary explains, the increasing performance, decreasing cost, and overall ubiquity of Android devices make them ideal brains for robots. The tricky part is getting ROS packages to play nice with Android, which is where ROS android_ndk comes in: it's a set of scripts that Ekumen is working on to make the process much easier. Unlike rosjava, ROS android_ndk gives you access to 181 packages from the desktop variant of ROS, with the ability to run native ROS nodes directly.

Ekumen is actively working on this project, with plans to incorporate wrappers for rosjava, an actionlib implementation, and support for ROS 2. In the meantime, there's already a set of tutorials that should help you get started.

ROSCon 2015 Hamburg: Day 1 - Gary Servin: ROS android_ndk: What? Why? How? from OSRF on Vimeo.

Next up: Lorenz Meier & Roman Bapst of ETH Zurich and PX4 Check out last week's post: Stefan Kohlbrecher of Technische Universitaet Darmstadt

Stefan's early research using tiny soccer-playing humanoid robots at Technische Universität Darmstadt in Germany prepared him well for software development on much larger humanoid robots that don't play soccer at all. From the Darmstadt Dribblers RoboCup team to the DARPA Robotics Challenge Team ViGIR, Stefan has years of experience with robots that need to walk on two legs and do things while not falling over (much).

Almost all of the software that Team ViGIR used to control its ATLAS robot (named Florian) was ROS-based. Stefan credits the team's prior experience with ROS, ROS's existing software base, and its vibrant community for why Team ViGIR chose ROS from the very beginning. Controlling the ATLAS robot is exceedingly complex, and Stefan takes us through the software infrastructure that Team ViGIR used during the DRC, from basic perception to motion planning to manipulation and user interfaces.

With lots of pictures and behind-the-scenes videos, Stefan describes how Team ViGIR planned to tackle the challenging DRC Finals course. The team used both high-level autonomy and low-level control, with an emphasis on dynamic, flexible collaboration between robot and operator. Stefan narrates footage of both of Florian's runs at the DRC Finals; each was eventful, but we won't spoil it for you.

To wrap up his talk, Stefan describes some of the lessons that Team ViGIR learned through their DRC experience: about ROS, about ridiculously complex humanoid robots, and about participating in a global robotics competition.

ROSCon 2015 Hamburg: Day 1 - Stefan Kohlbrecher: An Introduction to Team ViGIR's Open Source Software and DRC Post Mortem from OSRF on Vimeo.

Next up: Gary Servin of Creativa77 Check out last week's post: Mark Shuttleworth of Canonical
Cross posted from the OSRF Blog

In 2004, Canonical released the first version of Ubuntu, a Debian-based open source Linux OS that provides one of the main operational foundations of ROS. Canonical's founder, Mark Shuttleworth, was CEO of the company until 2009, when he transitioned to a leadership role that lets him focus more on product design and partnerships. In 2002, Mark spent eight days aboard the International Space Station, but that was before the ISS was home to a ROS-powered robot. He currently lives on the Isle of Man with 18 ducks and an occasional sheep. Ubuntu was a platinum co-sponsor of ROSCon 2015, and Mark gave the opening keynote on developing a business in the robot age.

Changes in society and business are both driven by changes in technology, Mark says, encouraging those developing technologies to consider the larger consequences that their work will have, and how those consequences will result in more opportunities. Shuttleworth suggests that robotics developers really need two things at this point: a robust Internet of Things infrastructure, followed by the dynamic mobility that robots add. However, software is a much more realistic business proposition for a robotics startup, especially if you leverage open source to create a developer community around your product and let others innovate through what you've built.

To illustrate this principle, Mark shows a live demo of a hexapod called Erle-Spider, along with a robust, high-level 'meta' build and packaging tool called Snapcraft. Snapcraft makes it easy for users to install software and for developers to structure and distribute it without having to worry about conflicts or inter-app security. The immediate future promises opportunities for robotics in entertainment and education, Mark says, especially if hardware, ROS, and an app-like economy can come together to give developers easy, reliable ways to bring their creations to market.

ROSCon 2015 Hamburg: Day 1 - Mark Shuttleworth: Commercial models for the robot generation from OSRF on Vimeo.

Next up: Stefan Kohlbrecher of Technische Universitaet Darmstadt Check out last week's post: OSRF's Brian Gerkey

ROSCon Program Video - Brian Gerkey


Cross posted from the OSRF Blog

ROSCon is an annual conference focused on ROS, the Robot Operating System. Every year, hundreds of ROS developers of all skill levels and backgrounds, from industry to academia, come together to teach, learn, and show off their latest projects. ROSCon 2015 was held in Hamburg, Germany. Beginning today and each week thereafter, we'll be highlighting one of the talks presented at ROSCon 2015.

Brian Gerkey (OSRF): Opening Remarks

Brian Gerkey is the CEO of the Open Source Robotics Foundation, which oversees core ROS development and helps to coordinate the efforts of the ROS community. Brian helped found OSRF in 2012, after directing open source development at Willow Garage.

Unless you'd like to re-live the ROSCon Logistics Experience, you can skip to 5:10 in Brian's opening remarks, where he provides an overview of ROSCon attendees and ROS user metrics that show how diverse the ROS community has become. Brian covers what's happened with ROS over the last year, along with the future of ROS and OSRF, and what we have to look forward to in 2016. He also touches on DARPA's Robotics Fast Track program, which has a submission deadline of January 31, 2016.

ROSCon 2015 Hamburg: Day 1 - Opening Remarks from OSRF on Vimeo.

Next up, Mark Shuttleworth from Canonical.
SV-ROS's team Maxed-Out earns the highest score at IROS 2014 in the first Microsoft Kinect Challenge.

The Microsoft Kinect Challenge is a showcase of BRIN (Benchmark Indoor Robot Navigation), the scoring software used to judge the competition. Each team had to create a mapping and autonomous navigation software solution that would run successfully on a provided Adept Pioneer 3DX robot.

The number of waypoints reached, elapsed time, and accuracy are combined to determine a contestant's score. Microsoft Research's Gershon Parent, the author of the BRIN scoring software, hopes to see BRIN become a universally accepted way of benchmarking autonomous robots' indoor navigation ability.
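The post doesn't give BRIN's actual formula, but combining those three factors might look something like the following hypothetical sketch (the function name, weights, and parameters are all invented here for illustration, not taken from BRIN):

```python
def brin_style_score(waypoints_reached, total_waypoints,
                     elapsed_s, time_limit_s, mean_error_m):
    """Hypothetical score: reward waypoint completion, with bonuses for speed and accuracy."""
    completion = waypoints_reached / total_waypoints
    speed_bonus = max(0.0, 1.0 - elapsed_s / time_limit_s)  # 1.0 if instant, 0.0 at the limit
    accuracy_bonus = 1.0 / (1.0 + mean_error_m)             # 1.0 at zero positional error
    # completion dominates; speed and accuracy only scale the remaining half
    return 100.0 * completion * (0.5 + 0.25 * speed_bonus + 0.25 * accuracy_bonus)
```

A weighting like this keeps waypoint completion as the decisive factor, so a fast but incomplete run can't beat a slower run that reaches every waypoint.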

SV-ROS is a Silicon Valley ROS users group that meets on the second-to-last Wednesday of each month at the Hacker Dojo in Mountain View, CA. Team Maxed-Out is led by Greg Maxwell; key team members are Girts Linde, Ralph Gnauck, Steve Okay, and Patrick Goebel. The team began work in May 2014 and built a winning ROS mapping, localization, and navigation solution in just a few months, beating five other international teams.

Maxed-Out's winning software solution was based on the ROS Hydro distribution running on a powerful GPU-enabled laptop with Ubuntu 12.04 and Nvidia CUDA 6.0 GPU parallel-processing software. The team outscored all the others by incorporating RTAB-Map, a mapping, localization, and navigation library with a new point-cloud solution developed by Mathieu Labbé, a graduate student at the Université de Sherbrooke.

Team Maxed-Out's code is up at SV-ROS's Github repository and documented on this meetup page.

Pictures of the event are posted here
