Recently in robots Category
Cross Posted from the Open Source Robotics Foundation Blog
Toyota's Human Support Robot, or HSR, will provide assistance to older adults and people with disabilities. A one-armed mobile robot with a telescoping spine, the HSR is designed to operate in indoor environments around people. It can reach the floor, tabletops, and high counters, allowing it to do things like retrieve a dropped object or put something away in its rightful place. An exemplar of the next generation of robot manipulators, the arm is low-power and slow-moving, reducing the chance of accident or injury as it interacts with people.
And it runs ROS. Dr. Yasuhiro Ota, Manager of the Toyota Partner Robot Program, tells us that the HSR runs ROS Fuerte [http://ros.org/wiki/fuerte] and uses a number of ROS packages, including: roscpp, rospy, rviz, tf, std_msgs, pcl, opencv. As for why they chose to use ROS, Dr. Ota says, "ROS provides an excellent software developmental environment for robot system integration, and it is also comprised of a number of useful ready-to-use functions."
There's exciting news out of Boston today with the launch of Rethink Robotics' new robot. Rethink Robotics is developing a family of low-cost and highly intelligent robots that can perform simple tasks in a manufacturing environment, increasing the productivity of the people around them. Rethink Robotics was founded by Rodney Brooks, former director of the MIT Computer Science & Artificial Intelligence Laboratory and co-founder of iRobot Corporation.
Rethink's robots can be taken out of the box, taught a task by anyone, and start work in a few hours, eliminating the need for systems integration. They are safe for people to interact with at close range, and are easy to train and retrain on the fly. They are nothing like any existing industrial robots.
While all of this is very exciting for the robotics industry, and certainly for our friends at Rethink, what we personally find most exciting is the role played by ROS in today's news. Rethink's new Baxter robot is, in the words of CEO Scott Eckert, "built upon ROS." We had some hint from Rethink's (then Heartland's) support of ROSCon 2012 that they were doing something with ROS, but we were very pleasantly surprised today to hear that ROS is such a central part of Baxter.
As ROS edges closer to its five-year anniversary, this is a great milestone for the ROS community. Rethink is actively hiring for a Senior Developer Relations Engineer with expertise in ROS, and expects that individual to play an important role as part of the ROS community.
Congratulations to everyone at Rethink Robotics, and we are looking forward to their contribution to the ROS community.
I am pleased to announce the release of the romeo stack for ROS. This stack is a joint work with François Keith.
Romeo is a 143-cm humanoid robot designed by Aldebaran Robotics.
Its full description is available here:
The romeo stack contains the URDF model of this robot and its associated meshes.
This robot is AFAIK the first humanoid robot whose full description is freely available in ROS (i.e. kinematic chain, dynamic information and meshes). We would like to thank Aldebaran Robotics for authorizing us to publish these data.
The package is not yet complete (we are still missing the hands, the eyes, and the sensor positions, for instance), but we will be updating it as soon as possible. We will also provide SRDF and contact-zone information (see the rcpdf stack).
Guest post from Mikkel Rath Pedersen, Department of Mechanical and Manufacturing Engineering, Aalborg University
The autonomous industrial mobile manipulator "Little Helper" has been the focus of many research activities since the first robot was designed in 2008 at the Department of Mechanical and Manufacturing Engineering at Aalborg University, Denmark. The focus has always been on flexible automation, since this is paramount as production companies experience a shift from mass production to mass customization. One aim is to use existing industrial hardware and incorporate these components into a fully functioning industrial mobile manipulator.
Since the original design, the robot has been rebuilt several times. At the present time, the department has two versions of the Little Helper, at the two campuses of the department in Aalborg and Copenhagen. The two systems use the same hardware, the only differences being minor in the construction and electrical system.
Both systems include the following components:
- KUKA Light Weight Robot (LWR) arm (7DOF, integrated torque sensors in each joint)
- Neobotix MP-L655 differential drive platform, equipped with
- Two SICK S300 Professional laser scanners
- Five ultrasonic sensors
- Eight 12V batteries, yielding 152 Ah @ 24V total
- Schunk WSG-50 electrical parallel gripper
- Microsoft Kinect RGBD Camera
- Onboard ROS computer (workstation on one, laptop on the other)
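As a quick sanity check on the battery figures above, the numbers are consistent with a series-parallel pack. The per-battery capacity of 38 Ah and the exact wiring below are assumptions (the post only gives the totals):

```python
# Eight 12 V batteries yielding 152 Ah at 24 V implies four parallel
# strings of two batteries in series, at 38 Ah per battery (assumed).
BATTERY_VOLTAGE = 12.0   # V per battery (from the component list)
BATTERY_CAPACITY = 38.0  # Ah per battery (assumed)

per_string = 2           # batteries in series: voltages add
strings = 4              # strings in parallel: capacities add

pack_voltage = BATTERY_VOLTAGE * per_string    # 24.0 V
pack_capacity = BATTERY_CAPACITY * strings     # 152.0 Ah

print(pack_voltage, pack_capacity)
```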
A recent focus has been the implementation of ROS on the entire system, in order to move from vendor-specific communication protocols to something more general. This required the use of several existing packages readily available on the ROS website, including the stacks for the Kinect camera (openni_camera and openni_tracker) and the Neobotix stacks (neo_driver, neo_common and neo_apps) that were recently made available by Neobotix. However, much work has also gone into creating ROS packages for communicating with the KUKA LWR (through the Fast Research Interface available with the robot arm) and the Schunk gripper.
The goals of some current and future research projects are:
- modular architectures for mobile manipulators,
- task-level programming using robot skills,
- gesture-based instruction of mobile manipulators, and
- mission planning and control
The Little Helper is involved in the EU-FP7 projects TAPAS and GISA(ECHORD).
- PhD Student Mikkel Rath Pedersen, email@example.com
- PhD Student Carsten Høilund, firstname.lastname@example.org
- Postdoc Simon Bøgh, email@example.com
- Postdoc Mads Hvilshøj, firstname.lastname@example.org
- Professor Ole Madsen, email@example.com
- Associate Professor Volker Krüger, firstname.lastname@example.org
Guest post from Simon Roder from University of Bern, Institute for Surgical Technology & Biomechanics
The goal of our research is the development of a precision approach for minimally invasive hearing aid implantations. Our approach centers on an image-guided surgical robot system capable of drilling a direct tunnel access (diameter 1.2 mm) from the outside of the skull, through the temporal bone, into the middle ear. The drill trajectory is planned using high-resolution cone-beam computed tomography. The deviation between the planned and the actual drill trajectory must be less than 0.5 mm in order to avoid damaging sensitive nerves within the temporal bone.
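The 0.5 mm tolerance can be checked geometrically as the perpendicular distance between a measured drill-tip position and the planned trajectory. The sketch below is illustrative only (not the group's software); all units are millimeters:

```python
import math

def deviation_mm(planned_start, planned_end, actual_point):
    """Perpendicular distance (mm) from a measured drill position to the
    planned trajectory, modeled as the line through two planned points."""
    ax, ay, az = planned_start
    bx, by, bz = planned_end
    px, py, pz = actual_point
    # Direction of the planned trajectory.
    d = (bx - ax, by - ay, bz - az)
    # Vector from the trajectory start to the measured point.
    v = (px - ax, py - ay, pz - az)
    # |d x v| / |d| is the point-to-line distance.
    cx = d[1] * v[2] - d[2] * v[1]
    cy = d[2] * v[0] - d[0] * v[2]
    cz = d[0] * v[1] - d[1] * v[0]
    cross_norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    d_norm = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return cross_norm / d_norm

# A drill tip 0.3 mm off a trajectory running along the z-axis:
print(deviation_mm((0, 0, 0), (0, 0, 30), (0.3, 0.0, 15.0)))  # 0.3
assert deviation_mm((0, 0, 0), (0, 0, 30), (0.3, 0.0, 15.0)) < 0.5
```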
To achieve such accuracy, our system consists of a specially developed robotic manipulator with 5-DOF serial kinematics, guided by an optical tracking system with a tracking accuracy of 20 microns. The robot weighs around 5 kg and can thus be mounted directly on an OR table. It comprises a sensitive force-torque sensor and an electromyography (EMG) sensor integrated in the instrument tip. The surgeon controls the system's functionality by means of a graphical user interface, and the robot system itself through haptic force feedback.
The robot system, together with the available patient data, is modeled using ROS to observe the robot's movements, its sensors, and possible collisions in real time. The model is updated and visualized using rviz running on a dedicated client computer connected to the robot control system via CAN.
Kirkland, WA - January 31, 2012 - CoroWare, Inc. (COWI.OB), today announced a new upgrade offer for existing CoroBot® Classic and CoroBot Explorer unmanned ground vehicle (UGV) customers. These upgrades will help bring earlier CoroBot UGV models up to date, and will enable a new class of CoroBot applications based on ROS from Willow Garage.
The ROS software platform is rapidly becoming the standard for open robotics development, and has a large and active developer community. CoroWare's ROS Upgrade Program will help its customers migrate their existing CoroBot UGV platforms, which are based on Linux and the Player software distribution, to the Robot Operating System (ROS), which has been deployed on unmanned ground, air, and surface vehicles around the world.
"Willow Garage is delivering the ROS software platform with which vendors, such as CoroWare, can provide affordable and open mobile robot platforms that robot scientists need for prototyping robotics applications," said Brian Gerkey of Willow Garage. "CoroWare's announcement today will help grow the community of robotics researchers and educators who are building applications based on ROS."
CoroWare's ROS Upgrade Program includes an initial assessment of the CoroBot that the customer purchased. For some customers, only software upgrades will be required, and these will be free of charge. For customers who purchased older CoroBot models, hardware upgrades may be required and will be priced accordingly.
"CoroWare's ROS Upgrade Program will give our customers a greater choice of ROS-based applications and software modules to run on their existing CoroBot platforms", said Andrew Zager, product marketing engineer at CoroWare. "Because ROS is not limited to any robotics platform, we look forward to migrating any third party mobile robots and applications to ROS in the future."
CoroWare's ROS Upgrade Program for all CoroBot platforms is available now. Customers may get further details by visiting our website at robotics.coroware.com; or sending e-mail inquiries to email@example.com, or contacting us at 1-800-641-2676, option 1.
The Raven II is helping the open-source community advance the state of the art in surgical robotics. In a joint venture between the University of Washington and UC Santa Cruz, the National Science Foundation funded the development of seven identical Raven II surgical robots. Each system has a two-armed surgical robot, a guiding video camera, and a surgeon-interface system built on top of ROS.
These surgical robots are linked via the Internet so researchers can easily share new surgical robotics research and developments. Five Raven II robots are being given to major medical facilities at Harvard University, Johns Hopkins University, the University of Nebraska, UC Berkeley, and UCLA.
According to Blake Hannaford at the University of Washington:
"These are the leading labs in the nation in the field of surgical robotics, and with everyone working on the same platform, we can more easily share new developments and innovations."
Weeding in organic orchards is a tedious process done either mechanically or by weed burning. Researchers at the University of Southern Denmark and Aarhus University created the ASuBot (Aarhus and Southern Denmark University Robot), a self-driving tractor, to navigate around trees in organic orchards. Weeding is done using gas burners, and the ASuBot ensures that they do not damage the trees.
ASuBot is built on a Massey Ferguson 38-15 garden tractor outfitted with a SICK laser range finder and Topcon AES-25 steering. It is able to navigate autonomously without GPS antennas, which would not work under shaded trees and would also make the robot more costly.
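The post doesn't describe the detection algorithm, but one simple way a planar laser scan can be turned into tree-trunk positions is by clustering close-range returns. This is an illustrative sketch of the idea, not the actual ASuBot/FroboMind code:

```python
import math

def find_trees(ranges, angle_min, angle_step, max_range=3.0):
    """Return (x, y) centroids of clusters of laser returns closer
    than max_range (meters), e.g. tree trunks beside the tractor."""
    points = []
    for i, r in enumerate(ranges):
        if r < max_range:
            a = angle_min + i * angle_step
            points.append((r * math.cos(a), r * math.sin(a)))
    # Group consecutive points that lie within 0.3 m of each other.
    clusters = []
    for p in points:
        if clusters and math.dist(p, clusters[-1][-1]) < 0.3:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    # One centroid per cluster.
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]

# Synthetic scan: two adjacent short returns form a single "trunk".
print(find_trees([10.0, 1.0, 1.0, 10.0], angle_min=0.0, angle_step=0.1))
```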
The FroboBox, the ASuBot's on-board computer, is a Linux-powered computer running the FroboMind software that runs on top of ROS. FroboMind provides a common, conceptual architecture for field robots and has already been integrated with five different platforms.
For more information about ASuBot and FroboMind, please see fieldrobot.dk.
The Bilibot developers have been busy with their Create+Kinect platform and have two new things to share. First, there's a new video (above) that shows off their "Developer Edition", including the brand-new arm. Second, they're offering a rebate of up to $350 if you release original, innovative applications for Bilibot to the rest of the community. It's a great incentive to put the platform in the hands of more developers and encourage collaboration in the community.
For more information, please visit Bilibot.com.
Coroware has announced support for ROS on their CoroBot and Explorer mobile robots. They will be supporting ROS on both Ubuntu Linux and Windows 7 for Embedded Systems and plan to start shipping with ROS in the second quarter of this year.
"CoroWare's research and education customers are asking for open robotic platforms that offer a freedom of choice for both hardware and software components," said Lloyd Spencer, President and CEO of CoroWare. "We believe that ROS will further CoroWare's commitment to delivering affordable and flexible mobile robots that address the needs of our customers worldwide."
In order to get their users up and running on ROS, CoroWare will be hosting a "ROS Early Adopter Program" using their CoroCall HD videoconferencing system.
Users of Micro Air Vehicles (MAVs) will be happy to hear that the MAVLink developers have released software for ROS compatibility. MAVLink is a lightweight message transport used by more than five MAV autopilots, and it also offers support for two Ground Control Stations. This broad autopilot support allows ROS users to develop for multiple autopilot systems interchangeably. MAVLink also enables MAVs to be controlled from a distance: if you are out of wifi range, MAVLink can be used with radio modems to retain control at ranges of up to 8 miles.
MAVLink was developed in the PIXHAWK project at ETH Zurich, where it is used as the main communication protocol for autonomous quadrotors with onboard computer vision. MAVLink can also be used indoors on high-rate control links in systems like the ETH Flying Machine Arena.
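For a feel of why MAVLink is "lightweight": a frame is just a small header, the payload, and a two-byte checksum. The sketch below mimics early (v0.9-era) framing with the X.25/MCRF4XX-style checksum accumulation MAVLink uses; treat the details (start byte, field order, checksum seeding) as an approximation for illustration, not a reference for the real wire format:

```python
def mavlink_crc(data, crc=0xFFFF):
    """Checksum accumulation in the X.25/MCRF4XX style used by MAVLink."""
    for byte in data:
        tmp = byte ^ (crc & 0xFF)
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

def pack_frame(seq, sysid, compid, msgid, payload):
    """Header + payload + checksum, in the spirit of early MAVLink frames."""
    header = bytes([0x55, len(payload), seq, sysid, compid, msgid])
    crc = mavlink_crc(header[1:] + payload)  # checksum skips the start byte
    return header + payload + bytes([crc & 0xFF, crc >> 8])

frame = pack_frame(seq=0, sysid=1, compid=1, msgid=0, payload=b"\x03\x01")
print(len(frame))  # 6-byte header + 2-byte payload + 2-byte checksum = 10
```

With only eight bytes of overhead per message, the same frames fit comfortably over both wifi and low-bandwidth radio modem links.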
MAVLink is compatible with two Ground Control Stations: QGroundControl and HK Ground Control Station. Ground Control Stations allow users to visualize the MAV's position in 3D and control its flight. Waypoints can be directly set in the 3D map to plan flights. You can customize the layout of QGroundControl to fit your needs, as shown in this video:
MAVLink is now used by several mainstream autopilots:
- ArduPilotMega (main protocol)
- pxIMU Autopilot (main protocol)
- SLUGS Autopilot (main protocol)
- FLEXIPILOT (optional protocol)
- UAVDevBoard/Gentlenav/MatrixPilot (initial support)
For more information:
Guest post from Urko Esnaola of Tecnalia
Tecnalia and Pukas have cooperated to integrate sensors in a high-performance surfboard to record data of relevant surfing parameters in real operation -- while surfing waves.
The aim of the project is to have information about "what's going on" in a high-performance surfboard while a surfer is riding it. This will help (i) surfboard manufacturers, by providing valuable information for fabricating optimal-performance surfboards, and (ii) the surfing community, by providing very complete information about surfing technique.
Strain gauges have been included to record the flex and torsion of the surfboard in real operation. An XSens MTi-G unit, integrating gyroscopes, accelerometers, a compass, and GPS, has been incorporated to record the surfboard's accelerations, speed, and movements. Pressure sensors have been installed on the surfboard deck to record the surfer's foot positions. All the data is recorded to a flash memory stick through an IGEPv2 embedded computer.
After a surf session, the data is transmitted over wifi to a PC. The software system to visualize and process the data has been developed in ROS.
Phase 1, board construction and electronics performance validation, has finished successfully. The exciting Phase 2 has now started: analyzing the data to find the keys to the mechanical behavior of surfboards and to improving surfers' technique. Professional surfers Aritz Aranburu, Hodei Collazo, Kepa Acero and Mario Azurza have already tested the surfboard. Other professionals like Tiago Pires, Joan Duru, Tim Boal and Eneko Acero are waiting for their chance.
Footage of the visualization software:
CSIRO's Bobcat is an S185 skid-steerer, complete with lift arms. This heavy-duty outdoor robot enables CSIRO robots to interact with an environment, rather than just move through it. In order to do this, they have equipped the Bobcat with a variety of sensors, including two horizontal lasers, a spinning laser, a camera, two IMUs, GPS, wheel encoders, and more. They also plan on integrating stereo, Velodyne, multi-modal radar, hyperspectral, and other sensors.
CSIRO's current focus with the Bobcat is shared and cooperative autonomy. With shared autonomy, a human tele-operator can intervene and provide corrections as the Bobcat performs a task. With cooperative autonomy, the Bobcat can leverage robots with other capabilities. This sort of coordination could enable a fleet of Bobcats to autonomously excavate an area.
CSIRO is in the process of migrating the Bobcat to ROS. The Bobcat was originally developed using DDX (Dynamic Data eXchange). DDX is a third generation middleware developed by CSIRO and provides features, like shared memory data exchange, that are complementary to ROS. They will continue using DDX for low-level realtime control, but sensor drivers and higher level code are being migrated to ROS. They are also investigating adding DDX-like transports to ROS.
I would like to announce the availability of a simple driver for the Neato Robotics XV-11 for ROS. The neato_robot stack contains a neato_driver package (a generic Python-based driver) and a neato_node package. The neato_node subscribes to a standard cmd_vel (geometry_msgs/Twist) topic to control the base, and publishes laser scans from the robot, as well as odometry. The neato_slam package contains our current move_base launch and configuration files (still needs some work).
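Under the hood, turning a cmd_vel command into motion on a two-wheeled base like the XV-11 is simple differential-drive kinematics. This sketch shows the idea only; it is not the actual neato_driver code, and the track width is an assumed value:

```python
# Sketch: mapping a cmd_vel-style (linear.x, angular.z) command to
# per-wheel speeds on a differential-drive base.
TRACK_WIDTH = 0.24  # meters between the wheels (assumed, not the real spec)

def twist_to_wheel_speeds(linear_x, angular_z, track=TRACK_WIDTH):
    """Map (m/s, rad/s) to (left, right) wheel speeds in m/s."""
    left = linear_x - angular_z * track / 2.0
    right = linear_x + angular_z * track / 2.0
    return left, right

# Driving straight at 0.2 m/s:
print(twist_to_wheel_speeds(0.2, 0.0))   # (0.2, 0.2)
# Turning in place at 1 rad/s:
print(twist_to_wheel_speeds(0.0, 1.0))   # (-0.12, 0.12)
```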
I've uploaded two videos thus far showing the Neato:
I also have to announce our repository, since we've never officially done that: albany-ros-pkg.googlecode.com
I hope to have documentation for this new stack on the ROS wiki later today/tonight.
ILS Social Robotics Lab SUNY Albany
The RGB-D project, a joint research effort between Intel Labs Seattle and the University of Washington Department of Computer Science & Engineering, has lots of demo videos on their site showing the various ways in which they have been using the PrimeSense RGB-D sensors in their work. These demos include 3D modeling of indoor environments, object recognition, object modeling, and gesture-based interactions.
In the video above, the "Gambit" chess-playing robot uses the RGB-D sensor to monitor a physical chessboard and play against a human opponent. And yes, that is the ROS rviz visualizer in the background.
Robotics Engineering Excellence (re2, Inc.) is a research and development company that focuses on advanced mobile manipulation, including self-contained manipulators and payloads for mobile robot platforms. As a spin-out of Carnegie Mellon, they've developed plug-n-play modular manipulation technologies, a JAUS SDK, and unmanned ground vehicles (UGV). They focus on the defense industry and their clients include DARPA, the US Armed Forces (Army, Navy, Air Force), Robotics Technology Consortium, and TSWG. RE2 has recently adopted ROS as a platform to architect and organize code.
RE2 has several projects using ROS, including interchangeable end-effectors and force/tactile feedback for manipulators. Their Small Robot Toolkit (SRT) is a plug-n-play robot arm with interchangeable end-effector tools, which can be used as a manipulator payload for mobile platforms. RE2 has also developed the capability to automatically change out end-effectors, which is being used with a modular recon manipulator for vehicle-borne IEDs. Bomb technicians can switch between various tools, like drills, saws, and scope cameras, to inspect vehicles remotely. RE2 is also working on a force and tactile sensing manipulator, which provides haptic feedback for an operator. This sort of feedback makes it easier to perform tasks like inserting a key into a lock, or controlling a drill.
RE2's manipulation technologies are also being used on mobile platforms. They are developing a Robotic Nursing Assistant (RNA) to help nurses with difficult tasks, such as helping a patient sit up, and transferring a patient to a gurney. The RNA uses a mobile hospital platform with dexterous manipulators to create a capable tool for nurses to use. RE2 is also working on an autonomous robotic door opening kit for unmanned ground vehicles.
RE2's expertise in manipulation made them a natural choice to be the systems integrator for the software track of the DARPA ARM program. The goal of this track is to autonomously grasp and manipulate known objects using a common hardware platform. Participants will have to complete various challenges with this platform, like writing with a pen, sorting objects on a table, opening a gym bag, inserting a key in a lock, throwing a ball, using duct tape, and opening a jar. There will also be an outreach track that will provide web-based access. This will enable a community of students, hobbyists, and corporate teams to test their own skills at these challenges.
RE2 had its own set of challenges: building a robust and capable hardware and software platform for these participants to use. The ARM robot is a two-arm manipulator with a sensor head. The hardware, valued at around half a million dollars, includes:
- Two Barrett WAM arms (7-DOF with force-torque sensors)
- Two Barrett Hands (three-finger, tactile sensors on tips and palm)
- Sensor head:
  - Swiss Ranger 4000 (176x144 at 54 fps)
  - Bumblebee 2 (648x488 at 48 fps)
  - Color camera (5 MP, 45 deg FOV)
  - Stereo microphones (44 kHz, 16-bit)
- Pan-tilt neck (4-DOF, dual pan-tilt)
A future version of the robot will incorporate a mobile base.
The software platform on the ARM robot is built on top of ROS. ROS was selected by RE2 for its modularity and tools. The modularity was important as the DARPA ARM project features an outreach program that will be providing a simulator. Users can switch between using the simulated and real robot with no changes to their code. The ARM platform also takes advantage of core ROS tools like rostest for testing and rosbag for data logging.
ROS has already proven itself on the similar CMU HERB robot, which has two Barrett arms and a mobile base. The various participants, including those in the outreach track, will be able to take advantage of the many ROS libraries for perception, grasping, and manipulation. This includes open-source frameworks like OpenRAVE, which was used on HERB for grasping and manipulation tasks.
Skybotix put out a video of their CoaX helicopter running with ROS teleoperation:
You can read more at I Heart Robotics.
Taylor Veltrop has announced veltrop-ros-pkg, as well as tools for Roboard-based humanoids.
I am pleased to announce the Veltrop ROS Repository!
If any of you out there are using small servo-based robots, especially humanoids, then check this out!
The Veltrop ROS Repository leverages ROS to get hobbyists and researchers quickly up and running with the Roboard operating a humanoid robot.
The Roboard is a small 1 GHz 486 platform that has built-in PWM control and many I/O ports:
The repository consists of a stack suitable for the Roboard, and another stack specialized for small joint-based robots.
The hobby community seems to reinvent the wheel with each person who combines an embedded PC with one of these humanoid robots. For beginners it's too daunting, and for others it is very time-consuming. So I hope to alleviate this, and get some help back too.
Here's a summary of some of the features:
- Pose the robot based on definitions in an XML file
- Execute motions by running a series of timed poses (XML)
- Stabilization via gyro data
- Definition of a KHR style robot linkage for 3D virtual modeling and servo control (URDF)
- Calibrate trim of robot with GUI
- Calibrate gyro stabilization with GUI
- Import poses and trim (not motions) from Kondo's Heart2Heart RCB files
- Control robot remotely over network with keyboard
- Control robot with PS3 controller over bluetooth
- Support for HMC6343 compass/tilt sensor
- Support for Kondo gyro sensors
- Stereo video capture and processing into point cloud
- CPU heavy tasks (such as stereo processing) can be executed on remote computer
- Controls Kondo PWM servos
Here are some missing parts (maybe others would like to contribute here?):
- Control Kondo serial servos
- GUI for editing and running poses/motions
- Tool to capture poses
- More sophisticated motion scripting
- GUI for calibration of A/D inputs
My next goals for this project are to incorporate navigation, and arm/gripper trajectory planning.
The documentation is here: http://taylor.veltrop.com/robotics/khrhumanoidv2.php?topic=veltrop-ros-pkg
There's a lot of other information relevant to the robot throughout the site.
The repository is hosted on sourceforge: http://sourceforge.net/projects/veltrop-ros-pkg
I hope someone out there has a chance to try this out and contribute!
The Autonomous Systems Lab (ASL) at ETH Zurich is interested in all kinds of robots, provided that they are autonomous and operate in the real world. From mobile robots to micro aerial vehicles to boats to space rovers, they have a huge family of robots, many of which are already using ROS.
As ASL is historically a mechanical engineering lab, their focus has been on hardware rather than software. ROS gives them a large community of software to draw from so that they can maintain this focus. Similarly, they run their own open-source software hosting service, ASLforge, which promotes the sharing of ASL software with the rest of the robotics community. Integrating with ROS allows them to more easily share code between labs and contribute to the growing ROS community.
The list of robots that they already have integrated with ROS is impressive, especially in its diversity:
- Rezero: Rezero is a ballbot, i.e. a robot that balances and drives on a single sphere.
- Magnebike: Magnebike is a compact, magnetic-wheeled inspection robot. Magnebike is designed to work on both flat and curved surfaces so that it can operate inside metal pipes with complex arrangements. A rotating Hokuyo scanner enables research on localization in these complex 3D environments.
- Robox: Robox is a mobile robot designed for tour guide applications.
- Crab: Crab is a space rover designed for navigation in rough outdoor terrain.
- sFly: The goal of the sFly project is to develop small micro helicopters capable of safely and autonomously navigating city-like environments. They currently have a family of AscTec quadrotors.
- Limnobotics: The Limnobotics project has developed an autonomous boat that is designed to perform scientific measurements on Lake Zurich.
- Hyraii: Hyraii is a hydrofoil-based sailboat.
That's not all! Stéphane Magnenat of ASL has contributed a bridge between ROS and the ASEBA framework. This has enabled integration of ROS with many more robots, including the marXbot, handbot, smartrob, and e-puck. ASL also has a Pioneer mobile robot using ROS, and their spinout, Skybotix, develops a coax helicopter that is integrated with ROS. Not all of ASL's robots are using ROS yet, but there is a chance that we will soon see ROS on their walking robot, autonomous car, and AUV.
ASL has created an ASLForge project to provide ROS drivers for Crab, and they will be working over the next several months to select more general and high-quality libraries to release to the ROS community.
ASL's family of robots is impressive, as is their commitment to ROS. They are single-handedly expanding the ROS community in a variety of new directions and we can't wait to see what's next.
Nate Roney from the Mobile Robotics Lab at SIUE has announced drivers for the Parrot AR.Drone, as well as the siue-ros-pkg repository
I'd like to share a project I've been working on with the ROS community.
Some may be familiar with the Parrot AR.Drone: an inexpensive quadrotor helicopter that came out in September. My lab got one, but I was pretty disappointed that it didn't have ROS support out of the box. It does have potential, though, with 2 cameras and a full IMU, so it seemed like a worthwhile endeavor to create a ROS interface for it.
So, I would like to announce the first public release of the ROS interface for the AR.Drone. Currently, it allows control of the AR.Drone using a geometry_msgs/Twist message, and I'm working on getting the video feed, IMU data, and other relevant state information published as well. Unfortunately, the documentation on how the Drone transmits its state information is a bit sparse, so getting at the video (anyone with experience converting H.263 to a sensor_msgs/Image, get in touch!) and IMU data is taking more time than I'd hoped, but it's coming along. Keep an eye on the ardrone stack; it will be updated as new features are added.
For now, anyone hoping to control their AR.Drone using ROS, this is the package for you! Either send a Twist from your own code, or use the included ardrone_teleop package for manual control.
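A teleop node like this is essentially a table mapping keys to Twist commands. The sketch below shows that shape with a stand-in Twist class and made-up key bindings (they are not the package's actual bindings):

```python
# Minimal stand-in for geometry_msgs/Twist: linear and angular velocities.
class Twist:
    def __init__(self, lx=0.0, ly=0.0, lz=0.0, az=0.0):
        self.linear = {"x": lx, "y": ly, "z": lz}
        self.angular = {"x": 0.0, "y": 0.0, "z": az}

# Hypothetical key bindings for a quadrotor teleop node.
KEY_BINDINGS = {
    "w": Twist(lx=+0.5),   # pitch forward
    "s": Twist(lx=-0.5),   # pitch backward
    "a": Twist(ly=+0.5),   # roll left
    "d": Twist(ly=-0.5),   # roll right
    "u": Twist(lz=+0.5),   # climb
    "j": Twist(lz=-0.5),   # descend
    "q": Twist(az=+1.0),   # yaw left
    "e": Twist(az=-1.0),   # yaw right
}

def command_for(key):
    """Look up the Twist for a keypress; unknown keys mean hover."""
    return KEY_BINDINGS.get(key, Twist())

print(command_for("w").linear["x"])  # 0.5
```

In a real node, the chosen Twist would be published on the drone's command topic in a loop; here it is just returned for inspection.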
You can find the ardrone and ardrone_teleop packages on the experimental-ardrone branch of siue-ros-pkg, which itself never had a proper public release until now. This repository represents the Mobile Robotics Lab at SIUE, and contains a few utility nodes I have developed for some of our past projects, with more packages staged for addition to the repository once we have time to document them properly for a formal release.
I'm hopeful that someone will find some of this useful. Feel free to contact me with any questions!
Thanks to your quick and precise answers, I have programmed a bridge between ASEBA and ROS:
This bridge allows you to load source code, inspect the network structure, read and write variables, and send and receive events from ROS.
This brings ROS to the following platforms:
- Mobots' marXbot, handbot and smartrob
The Humanoid Robots Lab at the University of Freiburg is using the Aldebaran Nao robot to do a variety of research, from climbing stairs, to imitating human motions, to footstep planning. One of their Naos, nicknamed "Osiris", has a special modification: a Hokuyo laser rangefinder head. This modification enables their research on localization for humanoid robots in complex environments.
Localization on humanoid robots is much more difficult due to the shaking motion of the robot while it walks. Using techniques that will be outlined in an upcoming IROS paper, they are able to do 6D localization of the Nao's torso based on laser, odometry, IMU, and proprioception data. In the video above, you can see Osiris localizing itself while walking and climbing stairs.
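To see why odometry alone isn't enough, here is a toy planar dead-reckoning integrator. The actual system estimates a full 6D torso pose and corrects it against laser data; this hedged sketch only shows the drift-prone odometry half in 2D:

```python
import math

def integrate_odometry(pose, v, w, dt):
    """Advance a planar (x, y, theta) pose by forward speed v (m/s)
    and yaw rate w (rad/s) over a small time step dt (s)."""
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

# Walk a 1 m square: forward 1 m, then turn 90 degrees, four times over.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = integrate_odometry(pose, 1.0, 0.0, 1.0)           # 1 s forward
    pose = integrate_odometry(pose, 0.0, math.pi / 2, 1.0)   # turn in place
# With perfect measurements the robot returns to the origin; any noise in
# v or w accumulates into position error, which is what the laser-based
# correction step removes.
print(pose)
```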
The researchers at Uni Freiburg have been long-time contributors to ROS and run their own alufr-ros-pkg open source repository, which contains libraries for articulation models, 3d occupancy grids (OctoMap), and a Nao stack that builds on Brown's Nao driver to provide additional ROS integration.
Uni Freiburg hopes to build on their research with humanoids to work towards a full navigation stack for humanoids. This will include a footstep planning library, which they will be releasing in alufr-ros-pkg soon. Below are some screenshots of their 3D scans and footstep plans in rviz.
"Humanoid Robot Localization in Complex Indoor Environments" by Armin Hornung, Kai M. Wurm, and Maren Bennewitz (to be presented at IROS 2010).
Previously: Robots Using ROS: Aldebaran Nao
We first covered Takashi Ogura's (aka OTL) robot projects back in March when he got the ROS PS3 joystick driver working with an i-Sobot. He has many more fun projects that are too numerous to cover: White Bear Robot (Roomba + Navigation stack), Arduino board for the i-Sobot, Twitter control for humanoid robot, and an all-time classic, humanoid robot with iPhone 3GS head.
Along the way, OTL has been putting together tutorials and previews of ROS libraries for his Japanese audience on ros-robot.blogspot.com, such as a Japanese speech node, Twitter for ROS using OAuth, URDF tutorial, Euslisp demos, and many more.
Many of those tutorials and projects came together in the video above: Kitemas LV1. Kitemas LV1 is a fun drink ordering robot that lets you order a drink and then pours it for you. Judging from previous posts, it looks like Kitemas is using a Roomba with Hokuyo laser range finder for autonomous navigation, as well as a USB web camera. Drink selection can be done either through colored coasters or a Twitter API, and the robot can be driven manually with a PS3 joystick.
Here's a software diagram that shows the various ROS nodes working together:
OTL has also created otl-ros-pkg, so readers of his blog can get code samples for his various tutorials and even see code for robots like Kitemas above. You can watch a video with a more dressed up version of Kitemas LV1 here.
PIXHAWK is an open-source framework and middleware for micro air vehicles (MAVs) that focuses on computer vision. The framework is being developed by students at ETH Zurich, who recently won second place in the EMAV 2009 Indoor Autonomy Competition. The PIXHAWK software runs on several MAVs, including the PIXHAWK Cheetah Quadrotor and the Pioneer Coax Helicopter. The Cheetah Quadrotor was demoed at ECCV 2010, demonstrating stable autonomous flight using onboard computer vision and some interaction using ball tracking. A parts list and assembly instructions for the Cheetah are available on the PIXHAWK web site.
The PIXHAWK middleware, MAVLink, runs on top of MIT's LCM middleware system and the PIXHAWK team has also integrated their system with ROS to provide access to tools like rviz. With rviz, PIXHAWK users can visualize a variety of 3D data from the MAVs, including pose estimates from the computer vision algorithms, as well as waypoints and IMU measurements. Other ROS processes can easily be interfaced with a PIXHAWK system.
The PIXHAWK team has also made their own open-source contributions to visualization tools for MAVs. Their QGroundControl mission planning tool provides a variety of visualizations, including real-time plotting of telemetry data. It was initially developed for PIXHAWK-based systems but is now open to the whole MAV community.
The rest of the PIXHAWK software, including the computer vision framework and flight controller software, is also available as open source. You can check out their winter 2010 roadmap, which includes the release of their ARTK hovering code base with ROS support.
The PIXHAWK team is also taking orders for a batch production run of their pxIMU Autopilot and Inertial Measurement Unit Board ($399). It provides a compact, integrated solution for those building their own quadrotors. The firmware is open source and compatible with the PIXHAWK software like QGroundControl.
We've previously featured Penn's AscTec quadrotors doing aggressive maneuvers; now you can see them out and about doing "Autonomous Multi-Floor Indoor Navigation with a Computationally Constrained MAV":
All of the computation is done onboard on a 1.6 GHz Intel Atom processor, and ROS is used for interprocess communication.
Update: the GRASP lab also has the quadrotors running through thrown hoops:
Above: Meka bimanual robot using Meka A2 compliant arm and H2 compliant hand
Meka builds a wide-range of robot hardware targeted at mobile manipulation research in human environments. Meka's work was previously featured in the post on the mobile manipulator Cody from Georgia Tech, which uses Meka arms and torso.
Meka was started by Aaron Edsinger and Jeff Weber to capitalize on their experience building robots like Domo, which featured force-controlled arms, hands, and neck built out of series-elastic actuators. Meka's expertise with series-elastic actuators allows them to target their hardware at human-centered applications, where compact, lightweight, compliant, force-controlled hardware is desired. Georgia Tech's HRI robot Simon, which uses Meka torso, head, arms, and hands, has proportions similar to a 5'7" female.
Meka initially built robot hands and arms, but is now transitioning into building all the components you need for a mobile manipulation platform. As Meka began to make this transition, they also started to transition to ROS. As a small startup company, they didn't have the resources to design and build the software drivers and libraries for a more complete mobile manipulation platform. They were also transitioning from a single real-time computer to using multiple computers, and they needed a middleware platform that would help them utilize this increased power.
One of Meka's new hardware products is the B1 Omni Base, which is getting close to completion. The B1 is based on the Nomadic XR4000 design and uses Holomni's powered casters. It is also integrated with the M3 realtime system and will have velocity, pose, and operational-space control available. The base houses a RTAI Ubuntu computer and can have up to two additional computers.
Meka is also designing two sensor heads that will be 100% integrated with ROS. The more fully-featured of the two will have five cameras, including Videre stereo, as well as a laser range finder, microphone array, and IMU. The tilting action of the head will enable the robot to use the laser rangefinder as a 3D sensor, in addition to the stereo.
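The tilting-laser idea comes down to simple geometry: each range reading in the planar scan is rotated by the head's current tilt angle to produce a 3D point. Here is an illustrative sketch of that projection (generic math, not Meka's code; the frame conventions are an assumption):

```python
import math

def scan_point_to_3d(r, beam_angle, tilt_angle):
    """Project one planar range reading into 3D given the head's tilt.

    r: measured range (m); beam_angle: angle of the beam within the scan
    plane; tilt_angle: pitch of the tilting laser (0 = horizontal).
    """
    # Point in the scan plane of the sensor.
    x_s = r * math.cos(beam_angle)
    y_s = r * math.sin(beam_angle)
    # Pitch the scan plane about the y axis by the tilt angle.
    x = x_s * math.cos(tilt_angle)
    y = y_s
    z = -x_s * math.sin(tilt_angle)
    return (x, y, z)

# A straight-ahead beam at 2 m with the head tilted 30 degrees down
# lands below the sensor's horizontal plane.
x, y, z = scan_point_to_3d(2.0, 0.0, math.radians(30))
print(round(x, 3), round(y, 3), round(z, 3))  # 1.732 0.0 -1.0
```

Sweeping the tilt angle while collecting scans, and accumulating the projected points, yields a 3D point cloud from a 2D sensor.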
The Meka software system consists of the Meka M3 control system coupled with ROS and other open-source libraries like Orocos' KDL. M3 is used to manage the realtime system and provide low-level GUI tools. ROS is used to provide visualizations and higher-level APIs to the hardware, such as motion planners that incorporate obstacle avoidance. ROS is also being used to integrate the two sensor heads that Meka has in development, as well as provide a larger set of hardware drivers so that customers can more easily integrate new hardware.
ROS is fully available with Meka's robots starting with last month's M3 v1.1 release. For lots of photos and video of Meka's hardware in action, see this Hizook post.
The CoaX helicopter is a micro UAV targeted at the research and educational markets. The small 320g helicopter includes an IMU, a downward-looking and three optional sideward-looking sonars, pressure sensor, color camera, and Bluetooth, XBee, or WiFi communication. In addition to two DSPs (dsPIC33), the CoaX has an optional Gumstix Overo computer that can run ROS. You can see more of the specs on their hardware wiki page.
Skybotix fully supports open source with the CoaX. The CoaX API, including low-level firmware and controller, is available open source under a GNU LGPL license. Their Gumstix Overo setup comes with a basic ROS installation. They include a ROS publisher for the CoaX state, a demo application for transmitting video data, and a GUI for visualizing both. Although the CoaX comes with minimal additional ROS libraries, there is a growing community of micro-UAV developers using ROS, including the micro-UAV-focused ccny-ros-pkg repository.
The CoaX was developed in collaboration with ETH Zurich. The Skybotix Youtube channel has videos of ETH Zurich student projects. Skybotix recently released a speed module for the CoaX based on an optical sensor, which enables indoor speed control as well as indoor hovering (video).
DARPA is having a contest to name their new robot for the ARM program. "The ARM Robot" has two Barrett WAM arms, BarrettHands, 6-axis force-torque sensors at the wrists, and a pan-tilt head. For sensors, it has a color camera, SwissRanger depth camera, stereo camera, and microphone.
The final software architecture and APIs have not been released yet, but the FAQ notes:
The software architecture is TBD, but is leaning toward a nodal software architecture using a tool such as Robotic Operating System (ROS).
The software track for the ARM program currently includes Carnegie Mellon University, HRL Laboratories, iRobot, NASA-Jet Propulsion Laboratory, SRI International and University of Southern California. It would certainly be a great boost for the ROS community to have more common platforms to develop and share the latest perception and manipulation techniques.
The CityFlyer project at the CCNY Robotics and Intelligent Systems Lab is using Ascending Technologies Pelican and Hummingbird Quadrotor helicopters to do research in 3D mapping and navigation. The Ascending Technologies platform provides a 1.6 GHz Intel Atom processor, 500 gram payload, GPS, and barometric altimeter. The CityFlyer project adds several sensors, including a Hokuyo URG-04LX and IMU. The Hokuyo URG has been modified to double as a laser height estimator. The CityFlyer project is able to combine data from these sensors to do indoor SLAM using GMapping.
The CityFlyer project has also created an RGB-D sensor by combining data from a SwissRanger 4000 and Logitech Webcam. They use this to build 3D maps for indoor environments using a 3D Multi-Volume Occupancy Grid (MVOG). Their MVOG technique is described in their RGB-D 2010 paper and more videos are here and here. Although the full sensor package exceeds the payload of the quadrotor, they anticipate that advances in RGB-D will make these techniques feasible for micro UAVs.
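The occupancy-map idea underlying this kind of 3D mapping can be illustrated with a toy voxel counter. This is far simpler than CCNY's MVOG structure (the voxel size and the point-counting scheme here are made up for illustration), but it shows the basic step of discretizing measured 3D points into grid cells:

```python
from collections import defaultdict

VOXEL = 0.1  # voxel edge length in meters (arbitrary for this sketch)

def voxel_key(p, size=VOXEL):
    """Discretize a 3D point into its voxel index."""
    return tuple(int(c // size) for c in p)

def insert_points(grid, points):
    """Count how many measurements fall in each voxel."""
    for p in points:
        grid[voxel_key(p)] += 1
    return grid

grid = defaultdict(int)
insert_points(grid, [(0.01, 0.02, 0.03), (0.05, 0.05, 0.05), (0.95, 0.0, 0.0)])
print(len(grid), grid[(0, 0, 0)])  # 2 2
```

A real map additionally models free space and sensor uncertainty, which is where representations like MVOG (or OctoMap, mentioned above) come in.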
CCNY has released a variety of drivers, libraries and tools to support the ROS community. These include drivers and tools for the AscTec platform, libraries for dealing with aerially mounted laser rangefinders, a New College Dataset parser, and libraries for using AR tags with ROS.
CCNY has also developed a "Ground Station" application that acts as a virtual cockpit for visualizing telemetry data from an AscTec quadrotor. It is also able to overlay GPS data on an outdoor map to visualize the UAV's tracks. I Heart Robotics has a great writeup on Ground Station, and you can also check out the documentation on ROS.org.
The ccny-ros-pkg is an excellent resource for the ROS community with complete documentation on a variety of packages, including videos that demonstrate these packages in use.
Bag files for the video above can be downloaded here.
Qbo is a personal, open-source robot being developed by Thecorpora. Francisco Paz started the Qbo project five years ago to address the need for a low cost, open-source robot to enable the ordinary consumer to enter the robotics and the artificial intelligence world.
A couple months ago, Thecorpora decided to switch their software development to ROS and have now achieved "99.9%" integration. You can watch the video below of Qbo's head servos being controlled by the ROS Wiimote drivers, as well as this video of the Wiimote controlling Qbo's wheels. Their use of the ROS joystick drivers means that any of the supported joysticks can be used with Qbo, including the PS3 joystick and generic Linux joysticks.
Qbo's many other sensors are also integrated with ROS, which means that they can be used with higher-level ROS libraries. This includes the four ultrasonic sensors as well as Qbo's stereo webcams. They have already integrated the stereo and odometry data with OpenCV in order to provide SLAM capabilities (described below).
It's really exciting to see an open-source robot building and extending upon ROS. From their latest status update, it sounds like things are getting close to done, including a nice GUI that lets even novice users interact with the robot.
Qbo SLAM algorithm:
The algorithm can be divided into three different parts:
The first task is to calculate the movement of the robot. To do that we use the driver for our robot that sends an Odometry message.
The second task is to detect natural features in the images and estimate their positions in a three dimensional space. The algorithm used to detect the features is the GoodFeaturesToTrackDetector function from OpenCV. Then we extract SURF descriptors of those features and match them with the BruteForceMatcher algorithm, also from OpenCV.
We also track the points matched with the sparse iterative version of the Lucas-Kanade optical flow in pyramids and avoid looking for new features in places where we are already tracking another feature.
This node takes in synchronized image messages and sends a PointCloud message with the positions of the features, their covariances in the three coordinates, and the SURF descriptors of the features.
The third task is to implement an Extended Kalman Filter and a data association algorithm based on the Mahalanobis distance between the PointCloud seen from the robot and the PointCloud of the map. To do that, we read the Odometry and PointCloud messages, and as output we send an Odometry message with the position of the robot and a PointCloud message with the features included in the map.
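The data-association step above can be illustrated with a small gated nearest-neighbor check using the Mahalanobis distance. This is a sketch, not Thecorpora's code: it assumes a diagonal covariance (the variances and the gating threshold below are made-up values), whereas the real filter uses the full EKF covariance.

```python
import math

def mahalanobis(obs, mapped, var):
    """Mahalanobis distance assuming a diagonal covariance (variances var)."""
    return math.sqrt(sum((o - m) ** 2 / v for o, m, v in zip(obs, mapped, var)))

def associate(obs, map_features, var=(0.04, 0.04, 0.09), gate=3.0):
    """Return the index of the matching map feature, or None (new landmark)."""
    best, best_d = None, gate
    for i, m in enumerate(map_features):
        d = mahalanobis(obs, m, var)
        if d < best_d:
            best, best_d = i, d
    return best

landmarks = [(1.0, 0.0, 0.5), (4.0, 2.0, 0.5)]
print(associate((1.1, 0.05, 0.55), landmarks))  # 0: matches the first landmark
print(associate((9.0, 9.0, 9.0), landmarks))    # None: start a new landmark
```

Observations that fall outside the gate for every landmark are treated as new features and added to the map.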
Lego Mindstorms NXT is a low-cost programmable robotics kit that is used in education and by hobbyists throughout the world. One of the most visible NXT events is First Lego League. The developers of foote-ros-pkg have developed a bridge that connects NXT with ROS, allowing NXT users to leverage all the ROS tools and capabilities.
The NXT-ROS software stack provides many useful tools to interface NXT robots with ROS. Currently NXT users can take robot models created with Lego Digital Designer, and automatically convert them into robot models compatible with ROS. The converted robot model can be visualized in rviz, and in the future we hope to add simulation capabilities in Gazebo, our 3D simulator. The bridge between NXT and ROS creates a ROS topic for each motor and sensor of the NXT robot.
Once a robot is connected to ROS, you can start running applications such as the base controller, wheel odometry, keyboard/joystick teleoperation, and even assisted teleoperation using the ROS navigation stack. The NXT-ROS software stack includes a number of example robot models for users to play with and to get a feel for using NXT with ROS.
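As a rough illustration of what a wheel odometry application computes, here is a minimal differential-drive pose integrator. This is generic textbook math, not code from the NXT-ROS stack, and the wheel_base value is invented:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Integrate a differential-drive pose from wheel travel distances."""
    d = (d_left + d_right) / 2.0          # distance traveled by robot center
    d_theta = (d_right - d_left) / wheel_base
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Both wheels travel 1 m: the robot drives 1 m straight ahead.
pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, d_left=1.0, d_right=1.0, wheel_base=0.12)
print(round(pose[0], 2), round(pose[2], 2))  # 1.0 0.0
```

Feeding the integrated pose to the navigation stack is what makes capabilities like assisted teleoperation possible on such a small robot.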
This new NXT-ROS software stack provides NXT users access to the open-source ROS community. NXT users now have access to state of the art open source robotics libraries available on ros.org.
The folks at the ModLab/GRASP Lab at Penn recently got their PR2 and used the occasion to test out "Mini-PR2". They used $5000 worth of CKBot modules to replicate the degrees of freedom of the real PR2 -- all except the torso. They used 18 modules (14 U-Bar, 4 L7, 4 motor) to create Mini-PR2, and they also added a counter-balance on the shoulder to help balance the arm.
The CKBot modules, which have previously been featured here, enable their lab to try out new ideas quickly and cheaply. In this case, they can use the PR2 simulator to drive their real robot, and they've used an actual PR2 to puppet Mini-PR2 (see 0:49 in video). They are now working on using the Mini-PR2 to puppet the actual PR2.
The CKBot modules don't have the computation power to run ROS on their own, but they can communicate with another computer that translates between the two systems. Their current system listens to the joint_states topic on the PR2 and translates those messages into CKBot joint angles.
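The translation layer can be sketched as a simple mapping from joint_states-style name/position pairs to per-module angle commands. This is illustrative only, not Modlab's code: the joint names, module IDs, and degree-based command units below are all hypothetical.

```python
import math

# Hypothetical mapping from PR2 joint names to CKBot module IDs.
JOINT_TO_MODULE = {
    "r_shoulder_pan_joint": 3,
    "r_elbow_flex_joint": 7,
}

def to_module_commands(names, positions, scale=180.0 / math.pi):
    """Convert a joint_states-like message (radians) to per-module degrees."""
    cmds = {}
    for name, pos in zip(names, positions):
        module = JOINT_TO_MODULE.get(name)
        if module is not None:
            cmds[module] = round(pos * scale, 1)
    return cmds

msg_names = ["r_shoulder_pan_joint", "r_elbow_flex_joint", "torso_lift_joint"]
msg_positions = [math.pi / 2, -math.pi / 4, 0.1]
print(to_module_commands(msg_names, msg_positions))  # {3: 90.0, 7: -45.0}
```

Joints with no corresponding module (like the torso, which Mini-PR2 lacks) are simply dropped.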
You can now use Shadow Robot hardware with ROS! Engineers at Shadow Robot have been busy building a ROS stack and have now reached their first release. This initial release includes an interface to both simulated and real hardware, which means that, whether or not you have a Shadow Dextrous Hand of your own, you can use your ROS software to see the Shadow hand move inside of ROS tools like rviz.
Robotino is a commercially available mobile robot from Festo Didactic. It's used for both education and research, including competitions like RoboCup. It features an omnidirectional base, bump sensors, infrared distance sensors, and a color VGA camera. The design of Robotino is modular, and it can easily be equipped with a variety of accessories, including sensors like laser scanners, gyroscopes, and the Northstar indoor positioning system.
REC has been supportive of the Openrobotino community, which provides open-source software for use with the Robotino, and now, they are providing official ROS drivers in the robotino_drivers stack. Their current ROS integration already supports the ROS navigation stack, and you can watch the video below that shows the Robotino being controlled inside of rviz.
We're very excited to see commercially available robot hardware platforms being supported with official ROS drivers. There are over a thousand Robotino systems around the world and we hope that these drivers will help connect the Robotino and ROS communities.
ROS has taken to the air! In a video that's quickly making the rounds on the Internet, you can see quadrotors from Penn's GRASP Lab performing all sorts of "aggressive" acrobatic stunts, from flying through narrow windows to landing on vertical perches. The entire system uses a mix of high-level ROS software for modularization and communication, as well as low-level microcontroller code.
The goal of this project was to fly a quadrotor precisely along aggressive trajectories. The basic components of the system are the quadrotor, a control laptop, and the Vicon motion capture system. The onboard microcontroller runs an attitude control loop at 1 kHz. The control laptop runs the higher-level position control loop. The control computer communicates with the quadrotor via an XBee link.
Communication between different programs on the control computer is done through ROS. A motion-capture node sends pose messages to a central controller, which in turn outputs control messages to code that sends the commands to the quadrotor. Experimentation was performed in a 3D simulator using a quadrotor model that contains a very accurate description of the dynamics of the actual quadrotor. The simulator communicates through ROS in a similar way as the hardware does, allowing for minimal overhead to switch between experimentation in simulation and on the actual quadrotor. ROS made it easy to modularize the code and write programs for each aspect of the entire problem independently.
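The layered control described above can be sketched as an outer position loop that outputs attitude setpoints for the onboard 1 kHz attitude loop to track. This is a toy PD controller, not Penn's implementation; the gains and saturation limit are invented:

```python
def position_to_attitude(pos, vel, setpoint, kp=0.4, kd=0.25, limit=0.5):
    """Outer-loop PD: position error (m) -> commanded tilt angle (rad).

    The onboard attitude loop is assumed to track this command.
    """
    err = setpoint - pos
    cmd = kp * err - kd * vel                # proportional on error, damped by velocity
    return max(-limit, min(limit, cmd))      # saturate the tilt command

# 1 m position error with the vehicle at rest commands a 0.4 rad tilt;
# a large error saturates at the limit.
print(position_to_attitude(0.0, 0.0, 1.0))   # 0.4
print(position_to_attitude(0.0, 0.0, 10.0))  # 0.5
```

Keeping the fast attitude loop onboard and the slower position loop on the laptop is what makes the XBee link's latency tolerable.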
Thanks to Daniel Mellinger of Penn for helping to put together this post.
The Media and Machines Lab at Washington University in St. Louis has integrated several of their robots with ROS, including an iRobot B21r and several Videre ERRATICs. They are also maintaining wu-ros-pkg, which is a repository of research projects, drivers, and utilities related to these robots.
Wash U.'s B21r, known as Lewis, is best known for being a mobile robot photographer. Lewis is currently being used for HRI research, and they are also reimplementing the photographer functionality in ROS. Lewis is fully integrated with ROS, including sensor data from 48 sonar sensors, 56 bump sensors, 2 webcams, and a Hokuyo laser rangefinder. There is also a Directed Perception PTU-46 pan-tilt unit on which they have mounted the webcams (driver).
The B21r community will be happy to know that Wash U. has deeply integrated this platform with ROS. They have created an urdf model, complete with meshes for visualizing in rviz, and they have also integrated the B21r with the ROS navigation stack. They are also providing an rwi stack, which includes their rflex driver. The rflex driver is capable of driving other iRobot/RWI robot platforms, including the B18, ATRV, and Magellan Pro.
Wash U. has also integrated their four Videre ERRATICs with ROS. They've named these robots Blood, Sweat, Toil, and Tears, and have equipped them with Hokuyo laser rangefinders and webcams. The ERRATICs enable them to explore research in multi-robot coordination and control. They're also developing on iRobot Creates using drivers from brown-ros-pkg.
The research at the Media and Machines Lab has led to several interfaces and visualizations for using robots. This includes RIDE (Robot Interactive Display Environment), which takes cues from Real Time Strategy (RTS) video games to provide an interface for easily controlling multiple robots simultaneously. They have also developed a visualization for mapping sensor data over time for search tasks and a 3D interface for binocular robots. RIDE is available in the ride stack, and much of their other research will soon be released in wu-ros-pkg.
HERB (Home Exploring Robotic Butler) is a mobile manipulation platform built by Intel Research Pittsburgh, in collaboration with the Robotics Institute at Carnegie Mellon University. HERB is designed to be a "robotic butler" and has been demonstrated in a variety of real-world kitchen tasks, such as opening refrigerator and cabinet doors, finding and collecting coffee mugs, and throwing away trash. HERB is powered by a variety of open-source libraries, including several developed by CMU researchers, like OpenRAVE and GATMO.
OpenRAVE is a software platform for robotics that was designed specifically for the challenges related to motion planning. It was created in 2006 by Rosen Diankov, and in late 2008 he integrated it with ROS. The benefits of this integration can be seen on HERB.
HERB has a Barrett WAM arm, a pair of low-power onboard computers, Pointgrey Flea and Dragonfly cameras, a SICK LMS lidar, a rotating Hokuyo lidar, and a Logitech 9000 webcam, all of which sit on a Segway RMP200 base. HERB communicates with off-board PCs over a wireless network.
ROS is glue for this setup: ROS is used for the hardware drivers, process management, and communication on HERB. ROS' ability to distribute processes across computers is used to help perform computation off the robot.
OpenRAVE provides an environment on top of this that unifies the controls and sensors for doing motion-planning algorithms, including sending trajectories to the arm and hand. OpenRAVE implements Diankov et al.'s work on caging grasps, which enables HERB to perform tasks like opening and closing doors, drawers, and cabinets, and turning handles.
In addition to manipulating objects, HERB has to be able to keep track of people and other movable objects that exist in real-world environments. HERB uses the GATMO (Generalized Approach to Tracking Movable Objects) library to track these movable objects. GATMO was developed by Garratt Gallagher and is available from gatmo.org. The GATMO library includes packaging and installation instructions for ROS.
The collaboration between CMU and Intel Labs Pittsburgh has produced numerous other libraries that have found their way into ROS. Rosen Diankov started the cmu-ros-pkg repository, which houses many of these libraries, and he also wrote rosoct, an Octave client library for ROS. Another library of note is the chomp_motion_planner package, which was implemented by Mrinal Kalakrishnan based on the work of Ratliff et al.
The Intelligent Autonomous Systems Group at TU München (TUM) built TUM-Rosie with the goal of developing a robotics system with a high degree of cognition. This goal is driving research in 3D perception, cognitive control, knowledge processing, and high-level planning. TUM is building their research on TUM-Rosie using ROS and has set up the open-source tum-ros-pkg repository to share their research, libraries, and hardware drivers. TUM has already released a variety of ROS packages and is in the process of releasing more.
TUM-Rosie is a mobile manipulator built on a Kuka mecanum-wheeled omnidrive base, with two Kuka LWR-4 arms and DLR-HIT hands. It has a variety of sensors for accomplishing perception tasks, including a SwissRanger 4000, FLIR thermal camera, Videre stereo camera, SVS-VISTEK eco274 RGB cameras, a tilting "2.5D" Hokuyo UTM-30LX lidar, and both front and rear Hokuyo URG-04LX lidars.
One of the new libraries that TUM is developing is the cloud_algos package for 3D perception of point cloud data. cloud_algos is being designed as an extension of the pcl (Point Cloud Library) package. The cloud_algos package consists of a set of point-cloud-processing algorithms, such as a rotational object estimator. The rotational object estimator enables a robot to create models for objects like pitchers and boxes from incomplete point cloud data. TUM has already released several packages for semantic mapping and cognitive perception.
TUM is also working on systems that combine knowledge reasoning with perception. The K-COPMAN (Knowledge-enabled Cognitive Perception for Manipulation) system in the knowledge stack generates symbolic representations of perceived objects. This symbolic representation allows a robot to make inferences about what is seen, like what items are missing from a breakfast table.
In the field of knowledge processing and reasoning for personal robots, TUM developed the KnowRob system that can provide:
- spatial knowledge about the world, e.g. the positions of obstacles
- ontological knowledge about objects, their types, relations, and properties
- common-sense knowledge, for instance, that objects inside a cupboard are not visible from outside unless the door is open
- knowledge about the functions of objects like the main task a tool serves for or the sequence of actions required to operate a dishwasher
KnowRob is part of the tum-ros-pkg repository, and there is a wiki with documentation and tutorials.
At the high level, TUM is working on CRAM (Cognitive Robot Abstraction Machine), which provides a language for programming cognitive control systems. The goal of CRAM is to allow autonomous robots to infer decisions, rather than just execute pre-programmed ones. Practically, the approach will enable tackling of the complete pick-and-place housework cycle: setting the table, cleaning the table, loading the dishwasher, unloading it, and returning the items to their storage locations. CRAM features showcased in this scenario include the probabilistic inference of which items should be placed where on the table, which items are missing, where items can be found, which items can and need to be cleaned in the dishwasher, and so on. As robots become more capable, it will be much more difficult to explicitly program all of their decisions in advance, and the TUM researchers hope that CRAM will help drive AI-based robotics.
Researchers at TUM have also made a variety of contributions to the core ROS system, including many features for the roslisp client library. They are also maintaining research datasets for the community, including a kitchen dataset and a semantic database of 3d objects, and they have contributed to a variety of other open-source robotics systems, like YARP and Player/Stage.
Research on the TUM-Rosie robot has been enabled by the Cluster of Excellence CoTeSys (Cognition for Technical Systems). For more information:
- TUM-Rosie hardware and software description
- IAS Video Channel
- tum-ros-pkg on ROS.org
- Overview Article: Towards Performing Everyday Manipulation Activities
- Overview Article: Towards Automated Models of Activities of Daily Life
- Overview Article: Generality and Legibility in Mobile Manipulation
The Modlab at Penn designed the CKBot (Connector Kinetic roBot) module to be fast, small, and inexpensive. These qualities enable it to be used to explore the promise of modular robotics systems, including adaptability, reconfigurability, and fault tolerance. They've researched dynamic rolling gaits, which use a loop configuration to achieve speeds of up to 1.6 m/s, as well as bouncing gaits by attaching passive legs. They are also using the CKBots to research the difficult problem of configuration recognition, and, for the Terminator 2 fans, they have even demonstrated "Self re-Assembly after Explosion" (SAE).
More recently, Modlab has developed ROS packages that can be used when the CKBots are connected to a separate ROS system. They have also created an open source repository, modlab-ros-pkg, for CKBot ROS users. The CKBot modules only have a few PIC processors -- not enough to run ROS -- so an off-board system enables them to use algorithms that require more processing power. In one experiment, they used a camera to locate AR tags on the CKBot modules. The locations were stored in tf, which was used to calculate coordinate transforms between modules. They have also used rviz to display the estimated position of modules during SAE when AR tags were not in use.
One of the projects Modlab is currently working on is a "mini-PR2" made out of CKBot modules. The mini-PR2 will be kinematically similar to the Willow Garage PR2 and is powered by a separate laptop. You can see an early prototype of mini-PR2 opening an Odwalla fridge:
CKbots trace their ancestry back to Professor Mark Yim's work on the PolyBot system at PARC. The PolyBot system had an impressive range of demonstrations, including fence and stair climbing, tricycle riding, and even transforming from a loop, to a snake, to a spider.
I Heart Robotics has released a rovio stack for ROS, which contains a controller, a joystick teleop node, and associated launch files for the WowWee Rovio. There are also instructions and configuration for using the probe package from brown-ros-pkg to connect to Rovio's camera.
You can download the rovio stack from iheart-ros-pkg:
As the announcement notes, this is still a work in progress, but this release should help other Rovio hackers participate in adding new capabilities.
Marvin is an autonomous car from Austin Robot Technology and the Department of Computer Science at The University of Texas at Austin. The modified 1999 Isuzu VehiCross competed in the 2007 DARPA Urban Challenge and was able to complete many of the difficult tasks presented to the vehicles, including merging, U-turns, intersections, and parking.
The team members for Marvin have a long history of contributing to open-source robotics software, including the Player project. Recently, Marvin team members have been porting their software to ROS. As part of this effort, they have setup the utexas-art-ros-pkg open-source code repository, which provides drivers and higher-level libraries for autonomous vehicles.
Like many Urban Challenge vehicles, Marvin has a Velodyne HDL lidar and Applanix Position and Orientation System for Land Vehicles (POS-LV). Drivers for both of these are available in the utexas-art-ros-pkg applanix package and velodyne stack, respectively. The velodyne stack also includes libraries for detecting obstacles and drive-able terrain, as well as tools for visualizing in rviz.
Professor Peter Stone's group in the Department of Computer Science has been using Marvin to do multiagent research. You can learn about the algorithms used in the Urban Challenge in their paper, "Multiagent Interactions in Urban Driving". More recently, they have been doing research in "autonomous intersection management". This research is investigating a multiagent framework that can handle intersections for autonomous vehicles safely and efficiently. As you can see in the video above, these intersections for autonomous vehicles can handle far more vehicles than intersections designed for human-driven vehicles. For more information, you can watch a longer clip and read Kurt Dresner and Peter Stone's paper, "A Multiagent Approach to Autonomous Intersection Management".
Many people have contributed to the development of Marvin in the past. Current software development, including porting to ROS, is being led by Jack O'Quin and Dr. Michael Quinlan under the supervision of Professor Peter Stone.
Bosch's Research and Technology Center (RTC) has a Segway-RMP based robot that they have been using with ROS for the past year to do exploration, 3D mapping, and telepresence research. They recently released version 0.1 of their exploration stack in the bosch-ros-pkg repository, which integrates with the ROS navigation stack to provide 2D-exploration capabilities. You can use the bosch_demos stack to try this capability in simulation.
- 1 Mac Mini
- 2 SICK scanners
- 1 Nikon D90
- 1 SCHUNK/Amtec Powercube pan-tilt unit
- 1 touch screen monitor
- 1 Logitech webcam
- 1 Bosch gyro
- 1 Bosch 3-axis accelerometer
Like most research robots, it's frequently reconfigured: they added an additional Mac mini, Flea camera, and Videre stereo camera for some recent work with visual localization.
Bosch RTC has been releasing drivers and libraries in the bosch-ros-pkg repository. They will be presenting their approach for mapping and texture reconstruction at ICRA 2010 and hope to release the code for that as well. This approach constructs a 3D environment using the laser data, fits a surface to the resulting model, and then maps camera data onto the surfaces.
Researchers at Bosch RTC were early contributors to ROS, which is remarkable: bosch-ros-pkg marks the first time Bosch has contributed to an open source project. They have also worked in the ros-pkg repository to improve the SLAM capabilities included with ROS Box Turtle, and they have been contributing improvements to a visual odometry library that is currently in the works.
The Healthcare Robotics Lab focuses on robotic manipulation and human-robot interaction to research improvements in healthcare. Researchers at HRL have been using ROS on EL-E and Cody, two of their assistive robots. They have also been publishing their source code at gt-ros-pkg.
HRL first started using ROS on EL-E for their work on Physical, Perceptual, and Semantic (PPS) tags (paper). EL-E has a variety of sensors and a Katana arm mounted on a Videre ERRATIC mobile robot base. The video below shows off many of EL-E's capabilities, including a laser pointer interface -- people use a laser pointer to select real-world objects for the robot to interact with.
HRL does much of their research work in Python, so you will find Python-friendly wrappers for much of EL-E's hardware, including the Hokuyo UTM laser rangefinder, Thing Magic M5e RFID antenna, and Zenither linear actuator. You can also get CAD diagrams and source code for building your own tilting Hokuyo 3D scanner.
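Since HRL publishes both the CAD and the code for their tilting Hokuyo 3D scanner, it is worth sketching the geometry that makes such a scanner work: each planar range reading (range r at beam angle theta) is lifted into 3D using the tilt angle phi of the platform. The frame conventions below are illustrative, not HRL's actual calibration.

```python
# Hedged sketch of tilting-laser geometry: lift a 2D Hokuyo range reading
# into 3D using the tilt angle of the scanning platform. Axis conventions
# here are illustrative, not HRL's actual calibration.

import math

def scan_point_to_3d(r, theta, phi):
    """Convert one laser return to (x, y, z) in the tilt unit's frame.

    r     -- measured range (meters)
    theta -- beam angle within the scan plane (radians)
    phi   -- tilt angle of the scan plane (radians)
    """
    # Point in the scanner's own (planar) frame.
    x_s = r * math.cos(theta)
    y_s = r * math.sin(theta)
    # Rotate the scan plane about the y-axis by the tilt angle.
    x = x_s * math.cos(phi)
    z = -x_s * math.sin(phi)
    return (x, y_s, z)
```

Sweeping phi through its range while collecting scans yields a full 3D point cloud, which is exactly what packages like ROS's laser_assembler aggregate for downstream perception.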
HRL also has a new robot, Cody, which you can see in the video below:
Update: you can read more on Cody at Hizook.
The end effector and controller are described in the paper, "Pulling Open Novel Doors and Drawers with Equilibrium Point Control" (Humanoids 2009). They've also published the CAD models of the end effector, and the source code can be found in the 2009_humanoids_epc_pull ROS package.
Whether it's providing open source drivers for commonly used hardware, CAD models of their experimental hardware, or source code to accompany their papers, HRL has embraced openness with their research. For more information:
The Kawada HRP-2V is a variant of the HRP-2 "Promet" robot. It uses the torso, arms, and sensor head of the HRP-2, but it is mounted on an omni-directional mobile base instead of the usual humanoid legs. The JSK Lab at the University of Tokyo uses this platform for hardware and software research.
In May of 2009 at the ICRA conference, the HRP-2V was quickly integrated with the ROS navigation stack as a collaboration between JSK and Willow Garage. Previously, JSK had spent two weeks at Willow Garage integrating their software with ROS and the PR2. ICRA 2009 was held in Kobe, Japan, where Willow Garage had a booth. With laptops and the HRP-2V set up next to the booth, JSK and Willow Garage went to work getting the navigation stack running on the HRP-2V. By the end of the conference, the HRP-2V was building maps and navigating the exhibition hall.
Like the Aldebaran Nao, the "Prairie Dog" platform from the Correll Lab at the University of Colorado is an example of the ROS community building on each others' results, and the best part is that you can build your own.
Prairie Dog is an integrated teaching and research platform built on top of an iRobot Create. It's used in the Multi-Robot Systems course at the University of Colorado, which teaches core topics like locomotion, kinematics, sensing, and localization, as well as multi-robot issues like coordination. The source code for Prairie Dog, including mapping and localization libraries, is available as part of the prairiedog-ros-pkg ROS repository.
Prairie Dog uses a variety of off-the-shelf robot hardware components: an iRobot Create base, a 4-DOF CrustCrawler AX-12 arm, a Hokuyo URG-04LX laser rangefinder, a Hagisonic Stargazer indoor positioning system, and a Logitech QuickCam 3000. The Correll Lab was able to build on top of existing ROS software packages, such as brown-ros-pkg's irobot_create and robotis packages, plus contribute their own in prairiedog-ros-pkg. Prairie Dog is also integrated with the OpenRAVE motion planning environment.
Starting in the Fall of 2010, RoadNarrows Robotics will be offering a Prairie Dog kit, which will give you all the off-the-shelf components, plus the extra nuts and bolts. Pricing hasn't been announced yet, but the basic parts, including a netbook, will probably run about $3500.
For more information, please see:
Photo: Prairie Dogs busy creating maps for kids and parents
The Care-O-bot 3 is a mobile manipulation robot designed by Fraunhofer IPA that is available both as a commercial robotic butler and as a platform for research. The Care-O-bot software has recently been integrated with ROS and, in just a short period of time, already supports everything from low-level device drivers to simulation inside of Gazebo.
The robot has two sides: a manipulation side and an interaction side. The manipulation side has a SCHUNK Lightweight Arm 3 with SDH gripper for grasping objects in the environment. The interaction side has a touchscreen tray that serves as both input and "output". People can use the touchscreen to select tasks, such as placing drink orders, and the tray can deliver objects to people, like their selected beverage.
The goals of the Care-O-bot research program are to:
- provide a common open source repository for the hardware platform
- provide simulation models of hardware components
- provide remote access to the Care-O-bot 3 hardware platform
The first two goals are supported by the care-o-bot open source repository for ROS, which features libraries for drivers, simulation, and basic applications. You can easily download the source code and perform a variety of tasks in simulation, such as driving the base and moving the arm. These in turn support the third goal of providing remote access to physical Care-O-bot hardware via their webportal.
For sensing, the Care-O-bot uses two SICK S300 laser scanners, a Hokuyo URG-04LX laser scanner, two Pike F-145 firewire cameras for stereo, and Swissranger SR3000/SR4000s. The cob_driver stack provides ROS software integration for these sensors.
The Care-O-bot runs on a CAN interface with a SCHUNK LWA3 arm, SDH gripper, and a tray mounted on a PRL 100 for interacting with its environment. It also has SCHUNK PW 90 and PW 70 pan/tilt units, which give it the ability to bow through its foam outer shell. The CAN interface is supported through several Care-O-bot ROS packages, including cob_generic_can and cob_canopen_motor, as well as wrappers for libntcan and libpcan. The SCHUNK components are also supported by various packages in the cob_driver stack.
The video below shows the Care-O-bot in action. NOTE: as the Care-O-bot source code is still being integrated with ROS, the capabilities you see in the video are not part of the ROS repository.
Junior is the Stanford Racing team's autonomous car that most famously finished a close second at the DARPA Urban Challenge. It successfully navigated a difficult urban environment that required obeying traffic rules, parking, passing, and many other challenges of real-world driving.
Those of you familiar with Junior are probably saying, "Junior doesn't use ROS! It uses IPC!"
That's mostly true, but researchers have recently started using ROS-based perception libraries in Junior's obstacle classification system.
From the very start, one of the goals of ROS was to keep libraries small and separable so that you could use as little, or as much, as you want. In the case of the tiny i-Sobot, a developer was able to use just ROS's PS3 joystick driver. When frameworks get too large, they become much more difficult to integrate with other systems.
In the case of Junior, Alex Teichman was able to bring his image descriptor library for ROS onto Junior. He has been using this library, along with ROS point cloud libraries, to develop Junior's obstacle classification system. Other developers on the team will also be allowed to choose ROS for their programs where appropriate.
You can find out more about Alex's image descriptor library at ros.org/wiki/descriptors_2d.
The Aldebaran Nao is a commercially available, 60cm tall, humanoid robot targeted at research labs and classrooms. The Nao is small, but it packs a lot into its tiny frame: four microphones, two VGA cameras, touch sensors on the head, infrared sensors, and more. The use of the Nao with ROS has demonstrated how quickly open-source code can enable a community to come together around a common hardware platform.
The first Nao driver for ROS was released by Brown University's RLAB in November of 2009. This initial release included head control, text-to-speech, basic navigation, and access to the forehead camera. Just a couple of days later, the University of Freiburg's Humanoid Robot Lab used Brown's Nao driver to develop new capabilities, including torso odometry and joystick-based tele-operation. Development didn't stop there: in December, the Humanoid Robot Lab put together a complete ROS stack for the Nao that added IMU state, a URDF robot model, visualization of the robot state in rviz, and more.
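Torso odometry of the kind the Freiburg stack added boils down to dead reckoning: integrating the robot's commanded or measured planar velocity over time to track its pose. The sketch below shows that integration step only; the function name and interface are our own, not the Freiburg driver's API.

```python
# Illustrative planar dead-reckoning step, the core of simple torso odometry:
# integrate linear velocity v (m/s) and angular velocity w (rad/s) over a
# timestep dt to update an (x, y, heading) pose. Names are hypothetical.

import math

def integrate_odometry(pose, v, w, dt):
    """Return the pose (x, y, th) advanced by one timestep."""
    x, y, th = pose
    x += v * math.cos(th) * dt  # move along the current heading
    y += v * math.sin(th) * dt
    th += w * dt                # then rotate by the commanded turn rate
    return (x, y, th)
```

Odometry like this drifts over time, which is why the Freiburg stack's IMU state and rviz visualization are valuable complements: they let researchers inspect and correct the estimated state.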
The Nao SDK already comes with built-in support for the open-source OpenCV library. It will be exciting to see what additional capabilities the Nao will gain now that it can be connected to the hundreds of different ROS packages that are freely available.
Brown is also using open source and ROS as part of their research process:
Publishing our ROS code as well as research papers is now an integral part of disseminating our work. ROS provides the best means forward for enabling robotics researchers to share their results and more rapidly advance the state-of-the-art.
-- Chad Jenkins, Professor, Brown University
ROS is starting to gain traction in Japan thanks to some dedicated early adopters and community-based translation efforts. Last year, the ROS Navigation stack was ported to the University of Tokyo's Kawada HRP-2V robot, and now ROS is finding use with hobby robots as well.
ROS libraries are designed to be small and easily broken apart. In this case, a small use of ROS has led to the claim of "smallest humanoid robot controlled by ROS." As the video explains, ROS isn't running on the robot. The i-Sobot is hooked up to an Arduino, which talks to a PC, which uses the ROS PS3 joystick driver. We're always thrilled to see code being reused, whether it's something as big as the ROS navigation stack, or something as small as a PS3 joystick driver.
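The PC in this pipeline has one small job: turn sensor_msgs/Joy-style axis values in [-1, 1] into discrete command bytes for the Arduino to relay to the i-Sobot. The sketch below shows that mapping in isolation; the command values, deadzone, and function name are hypothetical, not Ogutti's actual protocol.

```python
# Illustrative joystick-to-command mapping for a pipeline like the one above:
# PS3 joystick -> ROS joy driver -> PC -> Arduino -> i-Sobot. The command
# bytes and deadzone are hypothetical, not the actual i-Sobot protocol.

CMD_STOP, CMD_FORWARD, CMD_BACKWARD = 0x00, 0x01, 0x02
DEADZONE = 0.1  # ignore small stick drift around center

def axis_to_command(axis_value):
    """Map one joystick axis value in [-1, 1] to a walk command byte."""
    if axis_value > DEADZONE:
        return CMD_FORWARD
    if axis_value < -DEADZONE:
        return CMD_BACKWARD
    return CMD_STOP
```

In a real setup this function would sit in a ROS node's Joy callback, with the resulting byte written to the Arduino over a serial port.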
The video and demo was put together by "Ogutti", who has been maintaining a Japanese blog on ROS at ros-robot.blogspot.com/. Most recently, he has been blogging about using the Care-O-bot 3 simulation libraries.
In addition to Ogutti's Japanese ROS blog, you can go to ros.org/wiki/ja to follow the progress of the Japanese translation efforts for the ROS documentation.
With so many open-source repositories offering ROS libraries, we'd like to highlight the many different robots that ROS is being used on. It's only fitting that we start where ROS started: STAIR 1, the STanford Artificial Intelligence Robot. Morgan Quigley created the Switchyard framework to provide a software framework for this mobile manipulation platform, and it was the lessons learned from building software to address the challenges of mobile manipulation robots that gave birth to ROS.
The problems of mobile manipulation are too large for any one group to solve alone; they require multiple teams tackling separate challenges, like perception, navigation, vision, and grasping. STAIR 1 is a research robot built to address these challenges: a Neuronics Katana arm, a Segway base, and an ever-changing array of sensors, including a custom laser-line scanner, a Hokuyo laser rangefinder, an Axis PTZ camera, and more. The experience of developing for this platform in a research environment provided many lessons for ROS: small components, simple reconfiguration, lightweight coupling, easy debugging, and scalability.
STAIR 1 has tackled a variety of research challenges, from accepting verbal commands to locate staplers, to opening doors, to operating elevators. You can watch the video of STAIR 1 operating an elevator below, and find more videos and information about the STAIR program at stair.stanford.edu. You can also read Morgan's slides on ROS and STAIR from an IROS 2009 workshop.
In addition to the many contributions made to the core, open-source ROS system, you can also find STAIR-specific libraries at sail-ros-pkg.sourceforge.net/, including the code used for elevator operation.
The science of robotics has suffered from the inability of researchers to replicate each other's results. Replicating results begins with being able to run demonstrations in different laboratories, often on different hardware. The JSK lab at the University of Tokyo and Willow Garage have recently had some success in this area.
In March, Professors Inaba, Okada and four students visited Willow Garage to create demos on PR2 robots combining their infrastructure with ROS. At ICRA in May, Ken Conley from Willow Garage worked with the JSK team to bring ROS and those same demonstrations up on an HRP-2V robot from Kawada Industries. The HRP-2V combines the torso of an HRP-2 walking humanoid with an omni-directional wheeled base, producing a platform that is similar in structure to the PR2, but with a different sensor configuration, different kinematics, and so on.
On both occasions, the combined team was able to complete their work in under a week, demonstrating that replicating results in robotics is possible at a relatively low cost.