February 2016 Archives

From osrfoundation.org

PX4 is a flight control software stack for autonomous aerial robots that describes itself as "rocket science straight from the best labs, powering anything from racing to cargo drones." One of these labs is at ETH Zurich, where Roman Bapst serves on the faculty. Bapst works on computer vision and actively contributes to the PX4 autopilot platform.

Bapst starts out by describing some of the great things about the PX4 autopilot: it's open source, open hardware, and supported by the Linux Foundation's Dronecode Project, which provides a framework under which developers can contribute to an open source standard platform for drones. PX4 runs on 3DRobotics' Pixhawk hardware, and once you hook up some sensors and servos, it will autonomously pilot nearly anything that flies - from conventional winged aircraft to multicopters to hybrids.

One of PX4's unique features is its modularity, which is fundamentally very similar in structure to ROS. This means that you can run PX4 modules as ROS nodes, while taking advantage of other ROS packages under PX4 to do things like vision-based navigation and control. Additionally, it lets you easily simulate PX4-based drones within Gazebo, which, unlike real life, has a free reset button that you can push after a crash.

The PX4 team is currently getting their software modules running as ROS nodes on Qualcomm's Snapdragon Flight drone development platform, which would be a very capable (and affordable) way of getting started with a custom autonomous drone.

ROSCon 2015 Hamburg: Day 1 - Roman Bapst: ROS on DroneCode Systems from OSRF on Vimeo.

Next up: Morgan Quigley of OSRF
Check out last week's post: Gary Servín of Ekumen

ROS Live Community Meeting Thursday

The time for our next ROS Live meeting, on Robot Description Formats, is almost here!

The meeting will be held this Thursday at 11AM PST/2PM EST/7PM UTC for 1 hour (see time converter). You may want to show up 10-15 minutes early so that we can start on time.

We will use Mumble voice chat for the meeting.

Before the call, please download the Mumble client:

Linux: sudo apt-get install mumble
OSX and Windows: download the binary from https://github.com/mumble-voip/mumble/releases

Then open the client and configure your voice chat settings to "Push To Talk," and make sure you remember the hotkey. We also suggest disabling the "Text to Speech" option, unless you enjoy hearing a robotic voice read every text chat that comes in out loud.

To connect to our server, use the following info:
Port: 64738

If you want others to recognize who you are, use your real name or your GitHub username.

If you don't have a working microphone, you can still connect via text chat in the client.

We will record the conversation and make it publicly available after the meeting.

Make sure you've prepared your questions, comments, and concerns about the present and future state of Robot Description Formats in ROS and Gazebo.

Open Position at ETH Zurich

From Augusto Gandia via ros-users@

Here is a great open position in Zürich, at ETH Zurich, to work with a research team in digital fabrication (building with robots).

The salary is very good, and the work consists of developing a robotics simulation platform. The new Robotic Fabrication Laboratory is involved: http://gramaziokohler.arch.ethz.ch/web/e/forschung/186.html


Here is the full position:




Please feel free to ask anything or to share this with your contacts.


Robotics Engineers for mobile robot at Amy Robotics


From Ricky Li via ros-users@

Amy Robotics is an innovative company focused on service robots that enhance quality of life through robotic technologies, products, and services.

Our team is based in Hangzhou, China. We are developing autonomous service robots that assist people in everyday living and work. We need help improving our development process and getting our robot shipped soon. We are looking for multiple experienced roboticists to work on mobile robot navigation and computer vision for our service robots.

Position 1:

As a robotics engineer, you will be involved in designing, implementing, and testing systems for mapping, planning, localization, and context awareness. An excellent candidate will lead the research and development of our robot's navigation in its environments. We have many challenging problems, and we will give you the independence and flexibility to address them to create a complete product experience.

Requirements:
- Solid knowledge of mobile robot navigation theory. 
- Proficiency with C++, Python, and Linux 
- Hands on experience in ROS development in a Linux environment 
- Experience with robots and sensor systems in the real world 
- Experience in Android development is a big plus.   
- Ph.D., MSc (with 3 years of experience), or BSc (with 5 years of experience) in robotics or a related field

Position 2: 

As a Machine Vision Scientist you will lead the research and development of our robot's image-based understanding of its environment. Specifically you will be responsible for developing and testing tools and algorithms in areas such as: 

Detection and recognition of people and objects
Place recognition 
Motion detection (of things in the environment) while stationary and while driving 
Feature tracking 
Image stabilization 
Object detection and tracking 
Visual odometry etc. 

- Expert knowledge of C++
- Experience working with OpenCV
- Experience applying machine learning to real-world vision problems
- At least 3 years of experience designing, implementing, and tuning computer vision algorithms
- M.S./Ph.D., or B.S. with 5 years of experience, in computer science or a related field

Desirable skills: 
- Experience with deep learning algorithms or toolkits
- Experience with sensor fusion or multimodal perception
- Experience with embedded hardware development
- Experience with Python
- Experience with GPU computing
- Experience in Android development is a big plus

If you are interested in creating sophisticated robots, or in building a company, and have a strong desire to make a difference in the robot revolution, we would like to hear from you.
Please submit a resume, a letter of motivation, and (links to) any supporting materials (personal profile, open source contributions, projects, etc.) by email to Ricky Li: lirj

Gary Servín (Ekumen): ROS android_ndk: What? Why? How?


What's ROS android_ndk? Why should you care? How does it work? Gary Servín is with Ekumen, an engineering and software consulting company based in Buenos Aires, Argentina, that specializes in ROS, web, and Android development. With the backing of Qualcomm and OSRF, Ekumen has been trying to make it both possible and easy to run ROS applications on small Android devices like tablets and cellphones using Android's Native Development Kit (NDK).

As Gary explains, the increasing performance, decreasing cost, and overall ubiquity of Android devices make them ideal brains for robots. The tricky part is getting ROS packages to play nice with Android, which is where ROS android_ndk comes in: it's a set of scripts that Ekumen is working on to make the process much easier. Unlike rosjava, ROS android_ndk gives you access to 181 packages from the desktop variant of ROS, with the ability to run native ROS nodes directly.

Ekumen is actively working on this project, with plans to incorporate wrappers for rosjava, actionlib implementation, and support for ROS 2. In the meantime, there's already a set of tutorials on ROS.org that should help you get started.

ROSCon 2015 Hamburg: Day 1 - Gary Servin: ROS android_ndk: What? Why? How? from OSRF on Vimeo.

Next up: Lorenz Meier & Roman Bapst of ETH Zurich and PX4
Check out last week's post: Stefan Kohlbrecher of Technische Universitaet Darmstadt

Two ROS Summer Schools by FH Aachen


From Patrick Wiesen

After four ROS Summer Schools in Aachen, we are happy to announce our first ROS Summer School abroad, at the Tshwane University of Technology in Pretoria.


It will be a one-week event, from 7 March 2016 to 12 March 2016, culminating in a competition using the mobile robots developed at FH Aachen. The common mobile robotics topics using ROS will be covered: ROS Basics, Communication, Transforms, Hardware Interfacing, Teleoperation, Landmark Detection, Localization, Mapping, and a lot more. After the theoretical lectures, we will continue with our hands-on hardware workshops using ROS on the robots, as usual.



And because we are never going to stop, we are offering another ROS Summer School at our own university, FH Aachen, again this year. The event is planned from 15 August until 26 August 2016. Students in the fields of Robotics, Mechatronics, Computer Science, and Mechanical Engineering, and everyone else interested in learning the basic skills of ROS, are invited to register now! In these two weeks we will handle the following topics of mobile robotics in more detail:

ROS Basics, Communication, Hardware Interfacing, Teleoperation, Transforms, Gazebo Simulation, Landmark Detection, Localization, Mapping, Navigation, Control, an industrial exhibition, and more. Of course, all of these topics can be experienced on real hardware using our mobile robots after learning the theory.

And if this is still not enough for you, we offer an additional ROS UAV weekend afterwards, from 27 to 28 August. This will include assembling UAVs, first flight setup, flight modes, ROS interfacing, landmark detection, and getting in touch with autonomous flying. Feel free to choose this option in our application form. The application form, more information, photos, and videos can be found on our homepage:



All of this is organized by the Mobile Autonomous Systems and Cognitive Robotics (MASCOR) group. The ROS Summer School is designed to teach participants how to get started with ROS; it was created for those who have an interest in autonomous systems but didn't quite know where to begin. Organizers recommend that students have a basic knowledge of Linux (Ubuntu) and of one programming language such as Python or C++. The program is made possible by MASCOR and key players including Prof. Walter Reichert, Prof. Stephan Kallweit, Prof. Alexander Ferrein, and Prof. Ingrid Scholl.

February ROS Live meetup details announced

From David Lu!! via ros-users@

The time and topic for the February ROS Live meetup are set. The meeting will be Thursday, February 25th, 2016, at 2pm EST [1], and will be about Robot Description Formats.

Thank you, everyone, for voting. Future meetings will likely focus on other highly ranked topics.

Further details about how to participate will be released next week. In the meantime, you can help shape the conversation by visiting the Github page [2] and adding topics you'd like to hear discussed, gripes you have about URDF, or other things to spur the conversation.

David Lu!!
ROS Community Promoter Person (ROSCPP)
BossaNova.com / MetroRobots.com

P.S. As it turns out, trying to schedule this meeting was a massively over-constrained problem. There were no perfect solutions, and most options were unavailable for about half the respondents.

P.P.S. Due to possible limitations on the number of live participants on the call, please let me know if there are special circumstances for you being on the live call. As mentioned earlier, there will be a way to contribute questions if you're watching the live stream.

[1] http://www.timeanddate.com/worldclock/fixedtime.html?msg=ROS+Live+February&iso=20160225T14&p1=418&ah=1
[2] https://github.com/DLu/ros_live/wiki/Robot-Description-Formats
[3] http://tinyurl.com/roslivefebruary

Announcement of ROS In-hand scanner

From Patrick Wiesen via ros-users@

I'd like to announce my first ROS package - the ROS In-hand scanner. It is a 3D Scanning application based on the PCL In-hand scanner for small objects by Martin Sälzle and the PCL developers.

Until now, it was only possible to use OpenNI-based sensors. I implemented a standard ROS interface using the PointCloud2 message type.
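For readers new to the message type: a PointCloud2 message carries points as a flat byte buffer, described by per-field byte offsets and a per-point stride (point_step). The following is only a rough illustration of that layout, not the package's actual code, and it assumes the simplest case: a plain XYZ cloud of little-endian float32s at offsets 0, 4, and 8.

```python
import struct

# Hypothetical illustration of the PointCloud2 buffer layout, assuming a
# plain XYZ cloud: three little-endian float32 fields per point, so each
# point occupies point_step = 12 bytes.
POINT_STEP = 12

def pack_xyz(points):
    """Pack an iterable of (x, y, z) tuples into a PointCloud2-style buffer."""
    return b"".join(struct.pack("<fff", x, y, z) for x, y, z in points)

def unpack_xyz(data):
    """Recover the (x, y, z) tuples from a packed buffer."""
    count = len(data) // POINT_STEP
    return [struct.unpack_from("<fff", data, i * POINT_STEP) for i in range(count)]

buf = pack_xyz([(1.0, 2.0, 3.0), (0.5, -0.5, 2.25)])
print(unpack_xyz(buf))  # → [(1.0, 2.0, 3.0), (0.5, -0.5, 2.25)]
```

Real clouds add more fields (RGB, intensity, padding), which is why the message describes each field's name, datatype, and offset explicitly rather than fixing a layout.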

My first results using Intel RealSense can be seen in the following video:

In the future, it could be possible to create an RViz plugin, implement global registration, or generate watertight meshes for 3D printing applications.

You can find the source code on GitHub:

A ROS wiki page, including some instructions on how to use the package, will hopefully follow soon.

For the moment have fun with scanning!
Originally posted on osrfoundation.org

Stefan's early research using tiny soccer-playing humanoid robots at Technische Universität Darmstadt in Germany prepared him well for software development on much larger humanoid robots that don't play soccer at all. From the Darmstadt Dribblers RoboCup team to the DARPA Robotics Challenge Team ViGIR, Stefan has years of experience with robots that need to walk on two legs and do things while not falling over (much).

Almost all of the software that Team ViGIR used to control its ATLAS robot (named Florian) was ROS-based. Stefan credits the team's prior experience with ROS, ROS's existing software base, and its vibrant community for why Team ViGIR chose ROS from the very beginning. Controlling the ATLAS robot is exceedingly complex, and Stefan takes us through the software infrastructure that Team ViGIR used during the DRC, from basic perception to motion planning to manipulation and user interfaces.

With lots of pictures and behind-the-scenes videos, Stefan describes how Team ViGIR planned to tackle the challenging DRC Finals course. The team used both high-level autonomy and low-level control, with an emphasis on dynamic, flexible collaboration between robot and operator. Stefan narrates footage of both of Florian's runs at the DRC Finals; each was eventful, but we won't spoil it for you.

To wrap up his talk, Stefan describes some of the lessons that Team ViGIR learned through their DRC experience: about ROS, about ridiculously complex humanoid robots, and about participating in a global robotics competition.

ROSCon 2015 Hamburg: Day 1 - Stefan Kohlbrecher: An Introduction to Team ViGIR's Open Source Software and DRC Post Mortem from OSRF on Vimeo.

Next up: Gary Servin of Creativa77
Check out last week's post: Mark Shuttleworth of Canonical
From Richard Pollock

GeoDigital's innovative Autonomous Driving team is reshaping how geospatial data are acquired and interpreted, and the way that road vehicles use the interpretation results. This involves LIDAR and image sensing, spatial databases, photogrammetry, GNSS, inertial sensing, machine vision, machine learning, and embedded system development.

GeoDigital is recruiting a full-time senior-level and a full-time intermediate-level software developer, both to work in our Lompoc, California 93436 USA office. The work for these positions will include participation in the development of:

- software tools to increase the efficiency of GeoDigital's data interpretation activities

- embedded software systems for route feature data management

- embedded software systems for vehicle localization refinement

- techniques for updating route feature data and distributing updates to user vehicles

Development software systems used internally in this work include the Point Cloud Library (PCL),
the Robot Operating System (ROS), OpenCV, and CUDA.

Benefits of working for GeoDigital:

- Comprehensive medical, vision and dental coverage, with employer contribution to HSA.

- Company paid Life Insurance, ADD, Short Term Disability and Long Term Disability.

- Company contribution to 401k.

- Flexible scheduling.

- Collaborative, team-oriented working environment.

Senior-level software developer position qualifications:

- university degree in an engineering or science field with a computing emphasis.

- a minimum of 7 years of industrial software development experience with steadily increasing
  responsibilities, or a research-based graduate degree and a minimum of 5 years of industrial
  software development experience with steadily increasing responsibilities.

- expert-level C, C++, and Python programming skills.

- familiarity with development toolchains on Windows and Linux platforms.

- experience in the selection and application of techniques from one or more of the
  following fields: machine vision, point cloud processing, photogrammetry, computational
  geometry, machine learning.

- working knowledge of terrestrial coordinate systems.

Intermediate-level software developer position qualifications:

- a university degree in an engineering or science field with a computing emphasis.

- a minimum of 5 years of industrial software development experience with steadily increasing
  responsibilities.

- expert-level C, C++, and Python programming skills

- familiarity with development toolchains on Windows and Linux platforms

For both positions, experience with one or more of ROS, PCL, OpenCV, or CUDA is desirable.

To apply for either position, please send your resume to gayle@nimbushrsolutions.com
or visit our website at www.geodigital.com/careers.

Driverless Development Vehicle with ROS Interface

Choose either the Lincoln MKZ or Ford Fusion as a development vehicle.

Full control of
  • throttle
  • brakes
  • steering
  • shifting
  • turn signals
Read production sensor data such as
  • gyros
  • accelerometers
  • GPS
  • wheel speeds
  • tire pressures

There are no visual indications that the production vehicle has been modified. All electronics and wiring are hidden.

From David Scaramuzza via ros-users@

We are happy to release an open source implementation of our approach for real-time, monocular, dense depth estimation, called "REMODE".

The code is available at: https://github.com/uzh-rpg/rpg_open_remode

It implements a "REgularized, probabilistic, MOnocular Depth Estimation", as described in the paper:

M. Pizzoli, C. Forster, D. Scaramuzza
REMODE: Probabilistic, monocular dense reconstruction in real time
IEEE International Conference on Robotics and Automation (ICRA), pp. 2609-2616, 2014

The idea is to achieve real-time performance by combining Bayesian, per-pixel estimation with a fast regularization scheme that takes into account the measurement uncertainty to provide spatial regularity and mitigate the effect of noise.
Namely, a probabilistic depth measurement is carried out in real time for each pixel and the computed uncertainty is used to reject erroneous estimations and provide live feedback on the reconstruction progress.
The novelty of the regularization is that the estimated depth uncertainty from the per-pixel depth estimation is used to weight the smoothing.
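The two ingredients can be sketched in a few lines of plain Python. This is only a toy 1-D illustration of the ideas in the paper, not the REMODE implementation: per-pixel Bayesian fusion combines Gaussian depth estimates by inverse-variance weighting, and the regularizer smooths each depth toward its neighbours more strongly where the estimated variance is high (lam is an illustrative smoothing parameter, not a name from the paper).

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Bayesian fusion of two Gaussian depth estimates for one pixel:
    inverse-variance weighting; the fused variance always shrinks."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

def regularize(depth, var, lam=1.0):
    """Uncertainty-weighted smoothing over a 1-D row of pixels: confident
    pixels (low variance) barely move, while uncertain ones are pulled
    toward the mean of their neighbours."""
    smoothed = []
    for i, (d, v) in enumerate(zip(depth, var)):
        nbrs = [depth[j] for j in (i - 1, i + 1) if 0 <= j < len(depth)]
        target = sum(nbrs) / len(nbrs)
        w = lam * v / (lam * v + 1.0)  # weight grows with uncertainty
        smoothed.append((1.0 - w) * d + w * target)
    return smoothed

mu, var = fuse(2.0, 0.5, 2.0, 0.5)  # two agreeing measurements of one pixel
print(mu, var)  # → 2.0 0.25 (mean unchanged, variance halved)
```

The same fusion rule, applied per pixel as new frames arrive, is what lets the filter converge while its variance estimate tells the regularizer (and the user) which parts of the map to trust.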

Since it provides real-time, dense depth maps along with the corresponding confidence maps, REMODE is very suitable for robotic applications, such as environment interaction, motion planning, active vision and control, where both dense information and map uncertainty may be required.
More info here: http://rpg.ifi.uzh.ch/research_dense.html

The open source implementation requires a CUDA capable GPU and the NVIDIA CUDA Toolkit.
Instructions for building and running the code are available in the repository wiki.
Cross posted from the OSRF Blog

In 2004, Canonical released the first version of Ubuntu, a Debian-based open source Linux OS that provides one of the main operational foundations of ROS. Canonical's founder, Mark Shuttleworth, was CEO of the company until 2009, when he transitioned to a leadership role that lets him focus more on product design and partnerships. In 2002, Mark spent eight days aboard the International Space Station, but that was before the ISS was home to a ROS-powered robot. He currently lives on the Isle of Man with 18 ducks and an occasional sheep. Ubuntu was a platinum co-sponsor of ROSCon 2015, and Mark gave the opening keynote on developing a business in the robot age.

Changes in society and business are both driven by changes in technology, Mark says, encouraging those developing technologies to consider the larger consequences that their work will have, and how those consequences will result in more opportunities. Shuttleworth suggests that robotics developers really need two things at this point: a robust Internet of Things infrastructure, followed by the addition of dynamic mobility that robots represent. However, software is a much more realistic business proposition for a robotics startup, especially if you leverage open source to create a developer community around your product and let others innovate through what you've built.

To illustrate this principle, Mark shows a live demo of a hexapod called Erle-Spider, along with a robust, high-level 'meta' build and packaging tool called Snapcraft. Snapcraft makes it easy for users to install software and for developers to structure and distribute it without having to worry about conflicts or inter-app security. The immediate future promises opportunities for robotics in entertainment and education, Mark says, especially if hardware, ROS, and an app-like economy can come together to give developers easy, reliable ways to bring their creations to market.

ROSCon 2015 Hamburg: Day 1 - Mark Shuttleworth: Commercial models for the robot generation from OSRF on Vimeo.

Next up: Stefan Kohlbrecher of Technische Universitaet Darmstadt
Check out last week's post: OSRF's Brian Gerkey
From Anis Koubaa via ros-users@

I am happy to announce that the call for chapters for the Springer book on the Robot Operating System (ROS), Volume 2, is now open.

The book will be published by Springer. 

We look forward to receiving your contributions to make this book successful and useful for the ROS community.

In Volume 1, we accepted 27 chapters ranging from beginner level to advanced level, including tutorials, case studies, and research papers. Volume 1 is expected to be released by February 2016. After negotiating with Springer, authors received around an 80% discount on hardcopies as an incentive for their contribution, in addition to having their work published.

The call for chapters website (see above) presents in detail the scope of the book, the different categories of chapters, topics of interest, and submission procedure. There are also Book Chapter Editing Guidelines that authors need to comply with. 

In this volume, we intend to place a special focus on unmanned aerial vehicles using ROS. Papers that present the design of a new drone and its integration with ROS, simulation environments for unmanned aerial vehicles with ROS and SITL, ground-station-to-drone communication protocols (e.g., MAVLink, MAVROS), control of unmanned aerial vehicles, best practices for working with drones, etc. are particularly sought.

In a nutshell, abstracts must be submitted by February 15, 2016, to register chapters and to identify in advance any possible similarities in chapter contents. Full chapter submissions are due on April 20, 2016. Submissions and the review process will be handled through EasyChair; a link will be provided soon.

Each chapter will be reviewed by at least three expert reviewers, at least one of whom should be a ROS user and/or developer.

Want to be a reviewer for some chapters?
We are looking for ROS community users to provide reviews and feedback on proposals and chapters submitted for the book. If you are interested in participating in the review process, please consider filling in the reviewer interest form.

We look forward to receiving your contribution to a successful ROS reference!
From Paul Hvass


The ROS-Industrial Consortium Americas Annual Meeting will be held March 3-4 at Southwest Research Institute headquarters in San Antonio, Texas. Demonstrations are open to the public on March 3 for registered attendees, and will include Scan-N-Plan robotic automation, a mobile manipulator for order fulfillment, and more. Come and learn about the design of a four-story-tall laser coating removal mobile robot from Jeremy Zoss, the lead engineer behind the project, who will give the keynote address. On March 4, Consortium members will convene to hear updates from ROS-I community leaders in the US, Europe, and Asia. At lunch, Erik Nieves, CEO of PlusOne Robotics, will present his keynote on the future of robotics. The Consortium will then provide input to build a roadmap for 2016, and will learn about the progress and plans for the latest focused technical projects.

Interested in being part of the open source industrial robotics community?
To register online or to view the agenda, visit the events page at rosindustrial.org.

Work on driverless cars at Cruise Automation

From Richard Ni via ros-users@

Come work with a team of robotics experts on technically challenging problems, building products that improve lives and prevent car accidents. 

Our team is small, but we move quickly. Last year, we built prototype vehicles that have logged over 10,000 autonomous miles on California highways, and we're now working on some more exciting stuff.

In particular, we're looking for perception engineers to make sure our cars can accurately identify and track objects. Apply at https://jobs.lever.co/cruise/a2499312-3804-47d7-aad8-12c70228c4e2?lever-source=rw

For a complete list of our openings, see https://jobs.lever.co/cruise
From Emily Spady via ros-users@

We're Marble - a scrappy early-stage robotics startup based in San Francisco that designs, builds, and operates robots for last mile logistics - and we're looking for one of our first core robotics software engineers.

You are joining very early and will have a huge amount of responsibility, impact, and room for growth. You must be able to move fast and get things done. Expect to be mostly in ROS writing C++, with a healthy amount of scripting in Python and/or Node. You should be well versed in perception, navigation/path planning, and state estimation for mobile robots. Experience with deployed outdoor robots is a huge bonus; expect to spend a fair bit of time in the streets with us (and the robot, of course).

If you think you're an awesome fit, apply here:

Find this blog and more at planet.ros.org.


About this Archive

This page is an archive of entries from February 2016 listed from newest to oldest.

January 2016 is the previous archive.

March 2016 is the next archive.

Find recent content on the main index or look in the archives to find all content.