Towards ROS-native drones

From Medium:

Announcing alpha support for the PX4 flight stack on the path towards drones that speak ROS natively

The drone field is an interesting one to analyze from a robotics perspective. While capable flying robots are reasonably new, RC hobbyists have been building flying machines for much longer, developing communities around so-called flight stacks, or software autopilots.

Among these are popular options such as Paparazzi, APM (commonly known as ArduPilot) and PX4. These autopilots have matured to the point of acquiring autonomous capabilities, turning these flying machines into actual drones. Many of these open source flight stacks provide a general codebase for building basic drone behaviors; however, modifications are generally needed when tackling traditional problems in robotics such as navigation, mapping or obstacle avoidance. These modifications are not straightforward to perform directly in the autopilot code, so, in an attempt to enhance (or sometimes just simplify) the capabilities of autopilots, abstraction layers such as DroneKit started appearing.

For a roboticist, however, the common language is the Robot Operating System (ROS). Getting ROS to talk to these flight stacks natively would require a significant amount of resources and effort, so roboticists generally use a bridge such as the mavros ROS package instead.

We at Erle Robotics have been offering services with flying robots using such an architecture, but we've always wondered what the path towards a ROS-native drone would look like. In order to explore this possibility, we've added support for the PX4 Pro flight stack.


Supporting the PX4 Pro flight stack

The PX4 Pro drone autopilot is an open source (BSD) flight control solution for drones that can "fly anything from a racing to a cargo drone, be it a multicopter, plane or VTOL". PX4 has been built with a philosophy similar to ROS: it is composed of different software blocks, each of which communicates using a publish/subscribe architecture (currently a simplified pub/sub middleware called uORB).
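To give a feel for this style of middleware, here is a toy publish/subscribe bus in plain Python, loosely modeled on uORB's advertise/publish/subscribe/copy semantics (the names and API here are illustrative only, not the real uORB C API):

```python
# Toy pub/sub middleware, loosely modeled on uORB semantics: each topic
# keeps only its latest message, and subscribers poll for updates.

class Topic:
    def __init__(self, name):
        self.name = name
        self.latest = None      # only the most recent message is kept
        self.generation = 0     # bumped on every publish

class Bus:
    def __init__(self):
        self.topics = {}

    def advertise(self, name):
        return self.topics.setdefault(name, Topic(name))

    def publish(self, name, msg):
        topic = self.advertise(name)
        topic.latest = msg
        topic.generation += 1

    def subscribe(self, name):
        # A subscription handle remembers which generation it last saw.
        return {"topic": self.advertise(name), "seen": 0}

    def updated(self, handle):
        return handle["topic"].generation > handle["seen"]

    def copy(self, handle):
        handle["seen"] = handle["topic"].generation
        return handle["topic"].latest

bus = Bus()
sub = bus.subscribe("vehicle_attitude")
bus.publish("vehicle_attitude", {"roll": 0.01, "pitch": -0.02, "yaw": 1.57})
if bus.updated(sub):
    att = bus.copy(sub)
    print(att["yaw"])  # 1.57
```

The latest-value, poll-for-updates pattern shown here is what distinguishes this kind of lightweight onboard middleware from a queued message system.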

As part of an internal effort to research the path towards ROS-native flight stacks, and to open up this work to the community, I'm happy to announce official alpha support for the PX4 Pro in all our products meant for developers, such as the PXFmini, Erle-Brain 2 and Erle-Copter. Our team has put together a new set of operating system images for our products that will help you switch between flight stacks easily.

To install PX4 Pro, just type the following:

sudo apt-get purge -y apm-* # e.g.: apm-copter-erlebrain 
sudo apt-get update 
sudo apt-get install px4-erle-robotics


ROS-native flight stacks 

Using the PX4 Pro flight stack as a starting point, our team will be dedicating resources to prototyping the concept of a drone autopilot that speaks ROS natively, that is, one that uses ROS nodes to abstract each submodule within the autopilot's logic (attitude estimator, position control, navigator, ...) and ROS topics/services for communication between those blocks. Ultimately, this initiative should deliver a software autopilot capable of powering a variety of drones while merging nicely with the traditional ROS interfaces that roboticists have been building for over a decade now.
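That decomposition can be sketched in a few lines of plain Python standing in for ROS nodes and topics (the module boundaries and topic names below are invented for illustration, not PX4's actual ones):

```python
# Sketch: autopilot submodules as "nodes" exchanging messages over named
# topics via callbacks, ROS-style. All names are invented for illustration.

from collections import defaultdict

class TopicGraph:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, msg):
        for callback in self.subscribers[topic]:
            callback(msg)

graph = TopicGraph()
setpoints = []

# "attitude_estimator" node: consumes raw IMU data, publishes an estimate.
def attitude_estimator(imu):
    graph.publish("attitude", {"roll": imu["gyro_x"] * 0.1})

# "position_control" node: consumes the attitude estimate, emits a setpoint.
def position_control(att):
    setpoints.append({"motor_setpoint": att["roll"] * 2.0})

graph.subscribe("imu_raw", attitude_estimator)
graph.subscribe("attitude", position_control)

graph.publish("imu_raw", {"gyro_x": 0.5})
print(setpoints)
```

Because each submodule only touches named topics, any one of them could be swapped for a different implementation (or moved onto another machine) without the others noticing, which is the appeal of making the autopilot ROS-native.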

If you're interested in participating in this initiative, reach out to us at http://erlerobotics.com/blog/contact/.

ROSCon 2016 Videos and Slides Posted

ROSCon was record-breaking in every way, with over 450 attendees and a 60% increase in sponsorship over last year.

Thanks to everyone for coming and for your support! And thank you to our sponsors for the financial support that enabled the conference to grow!

We're happy to announce that we've posted recordings of all the talks on the program. You can find them linked at http://roscon.ros.org/2016/#program. We have also collected slides from most of the presenters, which are linked there as well.

If you would like to browse through the videos alone you can also find all 56 videos here: https://vimeopro.com/osrfoundation/roscon-2016

From Dave Coleman

Our second MoveIt! community meeting webcast will be on October 27th at 8am Pacific to discuss the latest developments and uses of MoveIt! around the world. Join us online to hear from research groups and industry on their perspectives of motion planning in ROS. Confirmed speakers include:

  • Recent Developments in MoveIt! - Dave Coleman
  • The Search-Based Planning Library (SBPL) - Dr. Maxim Likhachev and Andrew Dornbush
  • Updates from the Flexible Collision Checking Library (FCL) - Dr. Dinesh Manocha
  • Delft's Winning Amazon Picking Challenge Entry - Mukunda Bharatheesha
  • MoveIt! @ Fetch Robotics - Michael Ferguson
  • Q&A with original MoveIt! Developer & Founder - Dr. Ioan Sucan

Final agenda and details on how to join the AnyMeeting webcast will be sent out closer to the event. If you are interested in presenting your work to the community please contact me by October 21st.

Originally published in Medium:


I'm delighted to announce a new game-changing standard for building robot components, H-ROS: the Hardware Robot Operating System. H-ROS provides manufacturers with tools for building interoperable robot components that can easily be exchanged or replaced between robots.

H-ROS is about supporting a common environment of robot hardware components, where manufacturers comply with standard interfaces built upon ROS.

Powered by the popular Robot Operating System (ROS) and built with industry and developers in mind, H-ROS classifies robot components into five types: sensing, used to perceive the world; actuation, allowing interaction with the environment; communication, providing a means of interconnection; cognition, the brain of the robot; and hybrid, components that group together different sub-components under a common interface. These building-block-style parts come as reusable and reconfigurable components, allowing developers to easily upgrade their robots with hardware from different manufacturers and add new features in seconds.
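The five categories above can be sketched as a simple taxonomy (the names and interface below are a hypothetical illustration, not H-ROS's actual API):

```python
# Hypothetical sketch of the five H-ROS component categories described above.
from enum import Enum

class ComponentType(Enum):
    SENSING = "perceive the world"
    ACTUATION = "interact with the environment"
    COMMUNICATION = "provide a means of interconnection"
    COGNITION = "the brain of the robot"
    HYBRID = "group sub-components under a common interface"

class Component:
    """A robot part exposing the same interface regardless of manufacturer."""
    def __init__(self, name, ctype):
        self.name = name
        self.ctype = ctype

    def describe(self):
        return f"{self.name}: {self.ctype.value}"

lidar = Component("lidar", ComponentType.SENSING)
print(lidar.describe())  # lidar: perceive the world
```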

Motivation and origin

Building a robot is widely acknowledged to be a hard task, so it makes sense to reuse previous work to reduce this complexity. Unfortunately, there are currently few efforts to reuse hardware in either academia or industry. Robots are generally built by multidisciplinary teams (typically a whole research group or a company division) where different engineers get involved in the mechanical, electrical and logical design. Most of the time is spent dealing with hardware/software interfaces, and little is left for behavior development or real-world scenarios. Existing hardware platforms, although starting to become more common, lack extensibility.

Examples can be seen in several commercial and industrial robots that hit the market recently: they already include a common software infrastructure (generally the Robot Operating System (ROS)) but lack a hardware standard.

With H-ROS, building robots will be about putting H-ROS-compatible hardware components together to create new robot configurations. Constructing robots won't be restricted to the few with strong technical skills; it will be open to anyone with a general understanding of the sensing and actuation needed in a particular scenario.

H-ROS was initially funded by the US Defense Advanced Research Projects Agency (DARPA) through the Robotics Fast Track program in 2016. It is now available for selected industry partners and will soon be released for the wider robotics community. Additional information can be requested through its official web page at https://h-ros.com/. H-ROS was first unveiled and showcased at ROSCon 2016 (October 8th-9th) in Seoul, South Korea.

Introducing Cartographer

From Damon Kohler, Wolfgang Hess, and Holger Rapp, Google Engineering

We are happy to announce the open source release of Cartographer, a real-time SLAM library in 2D and 3D with ROS support.

Cartographer builds globally consistent maps in real-time across a broad range of sensor configurations common in academia and industry. The following video is a demonstration of Cartographer's real-time loop closure:


A detailed description of Cartographer's 2D algorithms can be found in our ICRA 2016 paper.

Thanks to ROS integration and support from external contributors, Cartographer is ready to use on several robot platforms with ROS support.

At Google, Cartographer has enabled a range of applications from mapping museums and transit hubs to enabling new visualizations of famous buildings.

We recognize the value of high quality datasets to the research community. That's why, thanks to cooperation with the Deutsches Museum (the largest tech museum in the world), we are also releasing three years of LIDAR and IMU data collected using our 2D and 3D mapping backpack platforms during the development and testing of Cartographer.


Our focus is on advancing and democratizing SLAM as a technology. Currently, Cartographer is heavily focused on LIDAR SLAM. Through continued development and community contributions, we hope to add both support for more sensors and platforms as well as new features, such as lifelong mapping and localizing in a pre-existing map.

More than 30,000 Questions on ROS Answers

We've reached another milestone for ROS Answers, 30,000 questions asked!


The 30,000th question was asked Friday by @Mani, who regularly helps answer others' questions as well.

To see the many contributors to the site, please view the list of users.

Congratulations to the answers.ros.org community for making the site the thriving resource that it is today. Keep up the fantastic work, and keep the questions (and answers) coming.

While you're on the site, if you've asked a question that hasn't yet been marked as answered, please consider revising it with more details or added clarity. Likewise, consider trying to answer one question each time you visit.

Grid Map Library

From Péter Fankhauser via ros-users@:

We'd like to announce our new Grid Map package, developed to manage two-dimensional grid maps with multiple data layers and designed for mobile robotic mapping in rough terrain navigation.

The package is available for ROS Indigo, Jade, and Kinetic and can be installed from the ROS PPA. After multiple development cycles and use in many projects, the library is well tested and stable.

Features:

  • Multi-layered: Developed for universal 2.5-dimensional grid mapping with support for any number of layers.

  • Efficient map re-positioning: Data storage is implemented as a two-dimensional circular buffer. This allows for non-destructive shifting of the map's position (e.g. to follow the robot) without copying data in memory.

  • Based on Eigen: Grid map data is stored as Eigen data types. Users can apply available Eigen algorithms directly to the map data for versatile and efficient data manipulation.

  • Convenience functions: Several helper methods allow for convenient and memory safe cell data access. For example, iterator functions for rectangular, circular, polygonal regions and lines are implemented.

  • ROS interface: Grid maps can be directly converted to and from ROS message types such as PointCloud2, OccupancyGrid, GridCells, and our custom GridMap message.

  • OpenCV interface: Grid maps can be seamlessly converted from and to OpenCV image types to make use of the tools provided by OpenCV.

  • Visualizations: The grid_map_rviz_plugin renders grid maps as 3D surface plots (height maps) in RViz. Additionally, the grid_map_visualization package helps to visualize grid maps as point clouds, occupancy grids, grid cells, etc.
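The circular-buffer re-positioning mentioned above can be sketched in a few lines of NumPy (a simplified single-layer illustration, not the actual grid_map implementation): shifting the map moves a start index and invalidates only the cells that scroll out, rather than copying the whole array.

```python
import numpy as np

# Simplified 1-layer grid map backed by a circular buffer (illustration only;
# the real grid_map library stores multiple Eigen layers).
class CircularGridMap:
    def __init__(self, size):
        self.data = np.full((size, size), np.nan)  # storage never moves
        self.start = np.array([0, 0])              # storage index of cell (0, 0)

    def _wrap(self, idx):
        return (self.start + idx) % self.data.shape

    def set(self, row, col, value):
        r, c = self._wrap((row, col))
        self.data[r, c] = value

    def get(self, row, col):
        r, c = self._wrap((row, col))
        return self.data[r, c]

    def shift(self, drow, dcol):
        """Move the map window by (drow, dcol) cells without copying data.

        Rows/columns scrolling out of the window are reset to NaN so the same
        storage can hold the cells scrolling in. Positive shifts only, for
        brevity.
        """
        size = self.data.shape[0]
        for r in range(drow):                       # rows leaving the window
            self.data[(self.start[0] + r) % size, :] = np.nan
        for c in range(dcol):                       # columns leaving the window
            self.data[:, (self.start[1] + c) % size] = np.nan
        self.start = (self.start + (drow, dcol)) % size

m = CircularGridMap(4)
m.set(0, 0, 1.0)
m.set(1, 1, 2.0)
m.shift(1, 0)               # follow the robot one row; row 0 scrolls out
print(m.get(0, 1))          # 2.0 (old cell (1, 1), kept without any copy)
```

Only the cells that leave the window are touched, so re-positioning cost scales with the shift distance, not with the map size.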

Source code, documentation, and tutorials are available at https://github.com/ethz-asl/grid_map

Originally published at the ROS Industrial blog:


This summer, Risto Kojcev, sponsored by the Google Summer of Code (GSoC) and directed by the Open Source Robotics Foundation (OSRF) and the ROS-Industrial (ROS-I) Consortium, developed a user-friendly ROS interface to control a manipulator and switch it into Cartesian impedance control mode. The external forces that the robot applies to the environment can also be set with the developed interface.

Risto shares:

Our first goal was to create a set of common messages containing the necessary parameters for setting impedance and force control. This allows interaction between the ROS ecosystem and the ROS driver of the robot. The messages are based on the commonly used parameters for impedance/force control and on discussion with the ROS community. The current set of ROS messages is available in the majorana repository. I would also like to encourage the robotics community to contribute to this project by sharing their suggestions; I believe that this set of messages could still be generalized and improved based on community input.
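For illustration, such a message might look like the following (a hypothetical sketch in ROS .msg format; the actual definitions live in the majorana repository and may differ in names and fields):

```
# CartesianImpedance.msg -- hypothetical sketch, not the actual definition
geometry_msgs/Pose goal_pose      # desired end-effector pose
float64[6] stiffness              # per-axis translational + rotational stiffness
float64[6] damping                # per-axis damping
geometry_msgs/Wrench max_wrench   # force/torque limits at the end effector
```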

The second goal was to develop a user interface which allows the user to set the necessary parameters for Cartesian impedance/force control and interactively switch between control modes. In this case I have expanded the previous GSoC 2014 project, the Cartesian Path Planner Plug-In for MoveIt!. The updated plugin now contains the relevant UI fields for setting Cartesian impedance and force control. Depending on the implementation and the properties of the robot controller, this plugin also allows interactive switching between control modes at runtime.

Find this blog and more at planet.ros.org.


Please submit content to be reviewed by emailing ros-news@googlegroups.com.

Monthly Archives

Find recent content on the main index or look in the archives to find all content.