For posterity, here's a screenshot from Saturday night:
The success of answers.ros.org is thanks to its many contributors. Answers.ros.org has been running for a little over two years now, and in that time the community has answered 7283 questions, 73% of the questions asked. That's an average of 10 questions answered per day for the last two years (including weekends and holidays). Traffic has steadily grown, and recently users have posted closer to 30 questions per day.
There are now 4399 registered users, 388 of whom have earned over 100 Karma, and 60 of whom have amassed 1000 Karma!
@lorenz @tfoote @dornhege and @joq deserve special recognition, as each of them has earned over 10,000 Karma. Accumulating a Karma stash of that size requires, for example, having your answers upvoted a thousand times.
Congratulations to the answers.ros.org community for making the site the thriving resource that it is today. Keep up the fantastic work, and keep the questions--and answers--coming.
Announcement by Benjamin Pitzer (Bosch) to ros-users
Dear ROS Users,
The Bosch Research and Technology Center in Palo Alto, CA is looking for highly motivated robotics researchers and developers interested in contributing to ROS and joining the PR2 Beta Program through our internship program.
We currently have the following openings:
Robotics Research Intern
Robotics Software Engineering Intern
Robotics Hardware Development Intern
Please use the "Email Resume" link on the job description page to apply.
Best regards,
Taylor Veltrop's announcement to ros-users
A year ago I released my first video of Kinect robotics when I loosely controlled a KHR (mini humanoid).
Now I have "completed" the robot avatar project. A treadmill, HMD, Wii remotes, Kinect, and NAO have all been integrated together using ROS to create a fully immersive experience. I really feel like my "self" is in the place of the robot while using this.
Here is a video demonstrating it: I use the interface to brush my cat remotely.
Actually, it looks like the project is not really complete after all... something I realized while filming this is that I need to add two-way audio...
Hope you enjoy the video! Happy new year!
Our NAO humanoid plays Jingle Bells for Christmas on a glockenspiel / xylophone. The robot can read a single-track song derived from MIDI and plays it on the instrument. Implementation by Stefan Band and Jonas Delleske.
Merry Christmas from the Humanoid Robots Lab at the University of Freiburg!
Robotic Open Platform (ROP) aims to make hardware designs of robots available under an Open Hardware license to the entire robotics community. It provides CAD drawings, electrical schematics, and the documentation required to build your own robot. In the near future, standard electromechanical interfaces between the various robot components will be presented, making it possible to combine hardware components from various groups into one robot. By making the robots modular, users are encouraged to develop their own components that can be shared with the community.
In software, the Robot Operating System (ROS) is now acknowledged as a standard platform and is used by numerous (research) institutions. This open source software is available to everyone, and by sharing knowledge with the community there is no need to 'reinvent the wheel', drastically speeding up development. Similarly, Robotic Open Platform (ROP) functions as a platform for sharing hardware designs, making them available to all research groups within the community.
Announcement by Patrick Goebel of Pi Robot to ros-users
I have posted a couple of new ROS tutorials for those getting started with either speech recognition or controlling a pan & tilt head:
- ROS by Example: Speech Recognition and Text-to-Speech (TTS)
- ROS by Example: Head Tracking in 3D (Part 2)
Please let me know if you run into any bugs.
Thecorpora has made a great commitment to open source with Qbo, a robot that brings advanced robotics technology to everyday consumers. To work on integration ideas, Qbo and his friend, Francisco Paz, stopped by Willow Garage to meet the team and the Willow Garage robots, such as the PR2. You can check out photos of Qbo hanging out with the PR2 and other robots at Thecorpora's blog.
Also on the way from Thecorpora is a new Android phone application for Qbo that provides telepresence: hear, see, and communicate with Qbo as if you were in the same room. Imagine getting Qbo to go where you want or directing it using Google's speech recognition software. This new app turns Qbo into a telepresence robot in any room. For details, see their blog post and video.
Version 0.5.17 of rosinstall has been released. This update contains the new experimental rosws tool, updated --dev options for the roslocate tool, and numerous bug fixes. Please try it out and provide feedback on the new rosws tool and the new distro-specific roslocate options. You can update using either of the commands below:
sudo pip install -U rosinstall
or
sudo easy_install -U rosinstall
The good folks at turtlebot.eu have released EU-compatible designs for the TurtleBot powerboard as well as metric versions of the TurtleBot trays. They've also adapted the design for consumer Roombas for those who cannot purchase a Create in Europe.
For more information and to download the designs, please see the turtlebot.eu post.
Thecorpora's Qbo showed off some cloud skills at the Campus Party in Valencia: the Qbo in Valencia was able to learn to recognize Tux, the Linux penguin, using a cloud-based object recognition system. Cloud-based recognition systems let us seamlessly access and collaboratively update knowledge about the world. During the live demo in Valencia, an engineer in Madrid taught the image of Tux to the system, which was then accessed by the Qbo in Valencia. For more information on this demo and Qbo, you can check out the Qbo blog.
I Heart Robotics/Engineering has been cranking out TurtleBot accessories as well as some DIY instructions so that you can get the most out of your TurtleBot hardware -- whether it be new capabilities or a little bit of flair.
Only a few days left to get your 15% discount from Clearpath Robotics for your own TurtleBot.
TurtleBot.com has launched! This new site provides access to TurtleBot information and also gives you new ways to access TurtleBot hardware. You can now order parts or assembled kits from several licensed vendors or take advantage of the open-source hardware designs to build your own robot from scratch.
Congrats to the GRASP Lab's PhillieBot for throwing out the first pitch at a Phillies game! PhillieBot is the creation of Professor Vijay Kumar, Jordan Brindza, Jamie Gewirtz, and Christian Moore. It features a Barrett arm on a Segway base and runs ROS. They made several modifications to the Barrett arm to get it up to pitching speeds, though the Phillies requested that they limit the pitch to a mild 30-40 mph.
Congrats to the NimbRo@Home team (University of Bonn) on their victory at the RoboCup German Open. During the competition, their Cosero and Dynamaid robots worked together to prepare breakfast. They demonstrated many difficult mobile manipulation tasks, like opening a refrigerator and retrieving orange juice, pouring milk into a cereal bowl, fetching a spoon, and recognizing a pointing gesture. They were also able to deal with unknown environments.
The competition was a great demonstration of ROS software used to solve difficult challenges. ROS, PCL, and OpenRAVE were popular components in the competition -- five out of the eight robots used ROS-related software. The NimbRo@Home robots use ROS for communication as part of their four-layer modular control architecture, which is described in their 2011 paper.
The CCNY Robotics Lab was the first to bring us Kinect drivers for ROS, so it's not surprising that they have been working on some awesome Kinect demos.
In the above video, they show some of the latest results of their 6D pose estimation. Simply by moving the Kinect around an office, they are able to register multiple scans together and create a 3D model of the scene. Their code works with no extra sensors: they move the Kinect around freehand.
The work was done by Ivan Dryanovski, Bill Morris, Ravi Kaushik, and Dr. Jizhong Xiao. They are using custom RGB-D feature descriptors for the scan registration and use OpenCV, PCL, and ROS under the hood. They are working on releasing and documenting their code. In the meantime, you can check out the rest of the cool software available in ccny-ros-pkg.
MIT's Robust Robotics Group, University of Washington, and Intel Labs Seattle teamed up to produce this demonstration of 3D map construction with a Kinect on a Quadrotor. Their demonstration combines onboard visual odometry for local control and offboard SLAM for map reconstruction. The visual odometry enables the quadrotor to navigate indoors where GPS is not available. SLAM is implemented using RGBD-SLAM.
A set of enterprising University of Waterloo undergrads have combined mobile robotics and 3D visual SLAM to produce 3D color maps. They mounted a Kinect 3D sensor on a Clearpath Husky A200 and used it to map cluttered industrial and office environments. The video shows off the impressive progress and capabilities of their "iC2020" module.
The iC2020 module was created by Sean Anderson, Kirk Mactavish, Daryl Tiong, and Aditya Sharma as part of their fourth-year design project at the University of Waterloo. They formed their group with the goal of using PrimeSense technology to create globally consistent, dense 3D color maps.
Under the hood, they use ROS, OpenCV, GPUSURF, and TORO to tackle the various challenges of motion estimation, mapping, and loop closure in noisy environments. Their software provides real-time views of the 3D environment as it is created. ROS is supported out-of-the-box on the Clearpath Husky, and Sean Anderson noted that "ROS was crucial to the project's success" due to its ease of use and flexibility.
Their source code is available under a Creative Commons-NC-SA license at the ic2020 project on Google Code.
- Optical Flow using Shi-Tomasi Corners
- Visual Odometry using Shi-Tomasi and GPU SURF
- Features undergo RANSAC to find inliers (in green)
- Least Squares is used across all inliers to solve for rotation and translation
- Loop closure detection using a dynamic feature library
- Global Network Optimization for loop closure
More information: iC 20/20
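To make the motion-estimation step in the list above concrete, here is a minimal sketch of the classic SVD-based least-squares solve for rotation and translation across a set of RANSAC inliers, written with Eigen. This illustrates the general technique, not the iC2020 code itself, and the function name is ours:

// Minimal sketch (not the iC2020 code): given matched 3D inlier points
// src[i] <-> dst[i] from RANSAC, solve least squares for the rotation R and
// translation t that best map src onto dst (the Kabsch/SVD method).
#include <Eigen/Dense>
#include <vector>

void estimateRigidTransform(const std::vector<Eigen::Vector3d>& src,
                            const std::vector<Eigen::Vector3d>& dst,
                            Eigen::Matrix3d& R, Eigen::Vector3d& t)
{
    // Centroids of both point sets.
    Eigen::Vector3d cs = Eigen::Vector3d::Zero(), cd = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < src.size(); ++i) { cs += src[i]; cd += dst[i]; }
    cs /= src.size();
    cd /= dst.size();

    // Cross-covariance of the centered point sets.
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (size_t i = 0; i < src.size(); ++i)
        H += (src[i] - cs) * (dst[i] - cd).transpose();

    // SVD of H; correct the sign so R is a proper rotation (det(R) = +1).
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
    if ((svd.matrixV() * svd.matrixU().transpose()).determinant() < 0)
        D(2, 2) = -1.0;
    R = svd.matrixV() * D * svd.matrixU().transpose();
    t = cd - R * cs;
}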
OTL has been a frequent contributor of great Roomba hacks, and this one is no exception. This time he's used a Kinect and a Roomba bluetooth connector to take back control of the vacuum. You can find out more in his blog post (Japanese). His blog is a great Japanese-language resource for getting into ROS.
The Chair of Automation Technology at Chemnitz University of Technology shows just how versatile a Kinect on a quadrotor can be. Their entry, "Autonomous corridor flight of a UAV using the Kinect sensor", uses the Kinect to find the ceiling, walls, and floor of a corridor. Once the quadrotor knows the geometric structure of the corridor, it can happily fly down the middle to get where it needs to go.
Their demo is built on an AscTec Pelican with a stripped-down Kinect. To handle the rest of the autonomous flight needs, they use an ADNS 3080 optical flow sensor for position and velocity control, and an SRF10 sonar sensor for altitude control. Sample-consensus algorithms from PCL are used to convert the 3D point cloud data into the estimated positions of these surfaces. Remarkably, they managed to make all of this run on an Atom processor.
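As a rough illustration of that last step (a generic sketch, not the Chemnitz team's actual code, with an assumed inlier threshold), extracting a dominant plane such as a wall or the floor from a Kinect cloud with PCL's sample-consensus API looks like this:

// Rough sketch: extract one dominant plane (e.g. a corridor wall or the
// floor) from a Kinect point cloud using PCL's RANSAC-based segmentation.
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::ModelCoefficients::Ptr
extractPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);  // fit a plane model
    seg.setMethodType(pcl::SAC_RANSAC);     // robust to clutter and noise
    seg.setDistanceThreshold(0.03);         // assumed 3 cm inlier band
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);   // plane: ax + by + cz + d = 0
    return coefficients;
}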
Chris Burbridge and Lorenzo Riano from the University of Ulster Intelligent Systems Research Centre used the Kinect to turn their robot into a mobile 3D person scanner. A Kinect is great for collecting 3D data, but sticking it on wheels is even better because you can collect data from multiple points of view and construct full 3D models.
Their demo uses the Kinect at both the skeleton tracking and 3D point cloud levels. The OpenNI skeleton tracker is used to identify the position of the person in the room, and then the 3D point cloud data is used to start building the full 3D scan. Once all of the point clouds are collected, they use PCL to create a unified 3D model.
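For those curious what that final merging step can look like, here is a minimal sketch of pairwise cloud alignment using PCL's ICP class. The UU pipeline likely does considerably more, and the correspondence distance below is an illustrative guess:

// Minimal sketch of pairwise scan alignment with PCL's ICP: aligns `source`
// onto `target` and returns the estimated source-to-target transform.
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

Eigen::Matrix4f alignScans(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source,
                           const pcl::PointCloud<pcl::PointXYZ>::Ptr& target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaxCorrespondenceDistance(0.1);  // ignore matches beyond 10 cm

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                     // iterate to convergence
    return icp.getFinalTransformation();
}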
The UU robot is a custom MetraLabs Scitos G5 mobile robot with a Kinect mounted at the end of a Schunk 7 DOF manipulator, but their code should be adaptable to other robot platforms.
"Teleop Kinect Cleanup", the ROS 3D Contest entry from Zoltan-Csaba Marton and Dejan Pangercic of TUM, is a couple of demos rolled into one. Using their entry, you can point at an object on a table and then, in the virtual rviz display, move that object somewhere else like a Jedi. You start with a world that looks like your own, but by the time you're done, you've rearranged a new virtual world to your liking.
That's not all. They've also figured out how to make this useful for giving commands to a robot. After you move a cup around in your virtual world, a command to move the cup can be passed to a robot. Thus, once you've rearranged your virtual world, it becomes the robot's job to make the real world look like your virtual world.
If you want to see their robots in action, you can check out this video of TUM's Rosie and PR2 making pancakes together.
Michael Ferguson is a prolific contributor to ROS. His entry into the ROS 3D Contest is "Improved AR Markers for Topological Navigation". AR markers are a cheap and effective way to find the position of objects in an image using inexpensive cameras. Michael recognized the opportunity to combine these markers with the Kinect, which has both camera and depth data, to turn them into markers in three dimensions. You can even use this to find the position of the robot by attaching the markers to known locations in your map.
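The underlying idea is straightforward: once a marker is detected at pixel (u, v) in the image, the Kinect's depth map plus the pinhole camera model give its position in 3D. Here is an illustrative sketch, not Michael's code; the intrinsics are typical Kinect defaults rather than calibrated values:

// Illustrative sketch: back-project a detected marker pixel (u, v) into a 3D
// point using the Kinect depth image and the pinhole camera model.
#include <opencv2/core/core.hpp>

cv::Point3f markerTo3D(const cv::Mat& depth_m, int u, int v)
{
    const float fx = 525.0f, fy = 525.0f;  // assumed focal lengths (pixels)
    const float cx = 319.5f, cy = 239.5f;  // assumed principal point

    float z = depth_m.at<float>(v, u);     // depth in meters at the marker
    float x = (u - cx) * z / fx;
    float y = (v - cy) * z / fy;
    return cv::Point3f(x, y, z);           // point in the camera frame
}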
We encourage you to check out the many different robots that Michael is building, from the iRobot Create and Dynamixel AX-12-based Nelson to the up-and-coming Create + Kinect + tripod Trike. The software for the contest entry along with these robots can be found in albany-ros-pkg, which also contains a Neato XV-11 driver for ROS.
Patrick Bouffard's "Quadrotor Altitude Control and Obstacle Avoidance" was featured back in December, when he first made waves on the Internet by mounting a Kinect to a quadrotor and flying it around his lab. The Kinect was used to detect the altitude as well as avoid obstacles.
Patrick has updated his video for the ROS 3D contest. He has also released starmac-ros-pkg, which contains the software used in his Berkeley lab to get these quadrotors in the air. starmac-ros-pkg includes ROS drivers for Vicon motion capture systems as well as an abstraction of the AscTec autopilot driver. It's a great complement to ccny-ros-pkg, which provides AscTec quadrotor drivers, computer vision libraries, and other tools.
Colin Lea's Anaglyph Viewer entry into the ROS 3D Contest brings a bit of retro 3D to our entries. Colored anaglyph glasses are an inexpensive way to view 3D content on a 2D screen. Seeing the data in 3D makes you more immersed in what's coming from the Kinect. For example, you can build more effective teleoperation cockpits that take advantage of your ability to perceive depth. Add more Kinect cameras and you can become fully immersed in a 3D world.
PS: Our skateboarding turtle says thanks!
The Kinemmings entry by Alberto Jose Ramirez Valadez, Jonathan Rafael Patino Lopez, and Marcel Stockli Contreras is a take on the classic Lemmings game. Now it's up to you and your body to guide the Kinemmings safely to their exit.
Kinemmings has the distinction of being the only game entry into the ROS 3D contest. In fact, as far as we know, it may be the first game package in all of ROS. We appreciate it as it means we can now tell our boss that we're "working on ROS".
You have your Kinect and want to mount it on your robot, but now you're faced with a challenge: you need to precisely determine the mounting point of your Kinect so that the data from it can be interpreted correctly, e.g. if you want to use it for autonomous navigation.
The "Automatic Calibration of Extrinsic Parameters" entry from François Pomerleau, Francis Colas and Stéphane Magnenat of the Autonomous Systems Lab at ETHZ makes solving this problem easy for users and does much more. If you run their software with the Kinect mounted, it will output the tf transform between your
base_link and the camera, making configuration easy.
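Once you have that transform, publishing it in ROS takes only a few lines with the tf API. Here is a hedged sketch; the offsets and frame names are placeholders, not the calibrator's actual output:

// Hedged sketch: broadcast a calibrated base_link -> camera transform.
// The numeric values below are placeholders, not real calibration results.
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <tf/transform_datatypes.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "camera_extrinsics_publisher");
    ros::NodeHandle nh;
    tf::TransformBroadcaster br;

    tf::Transform t;
    t.setOrigin(tf::Vector3(0.20, 0.0, 1.10));                 // placeholder offset
    t.setRotation(tf::createQuaternionFromRPY(0.0, 0.3, 0.0)); // placeholder tilt

    ros::Rate rate(10.0);
    while (nh.ok())
    {
        br.sendTransform(tf::StampedTransform(t, ros::Time::now(),
                                              "base_link", "camera_link"));
        rate.sleep();
    }
    return 0;
}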
They also released several lower-level libraries to help build other applications on top: libnabo for fast k-nearest-neighbor search, and libpointmatcher, a modular ICP library. These are important for building tracking applications, as shown in the video, as well as for building SLAM and other systems.
Patrick Goebel is the creator of Pi Robot, a custom-built hobby robot based on Robotis hardware. Patrick has been a frequent contributor to the Trossen and ROS communities, including writing a detailed essay for hobbyists getting into ROS.
His entry for the ROS 3D contest builds on Taylor Veltrop's teleop control to adapt it for the Pi Robot, as well as add in a base controller and the ability to define new gestures for control. Patrick has also contributed a serializer package for those wishing to use the Robotis Serializer microcontroller in ROS. Pi Robot may be one of a kind, but, thanks to Patrick's contributions, you have the software you need to build your own.
Patrick will be giving the featured presentation at tonight's Homebrew Robotics Club meeting.
Taylor Veltrop had the first ROS 3D Contest entry with his teleoperation control of a humanoid KHR/Roboard robot. He wasn't content to leave it at that: he beefed up his teleoperation system with Wiimote and leg-based control. He is also running it on an Aldebaran Nao.
One of the difficulties in using skeleton tracking libraries with the Kinect is that you do not get much information about the operator's hands. For those trying to use skeleton tracking to control a robot's arms, this creates a pickup problem: you can get the arm to the location of the item you wish to grab, but you don't have the control you need over the angle of the hand and the opening and closing of the gripper to complete the task.
Taylor solves this by enabling you to hold a Wiimote in each hand. With these additional controls, the operator can seamlessly use the Wiimotes to transmit extra information about the correct hand position, and the buttons on the Wiimote can trigger additional operations, like opening and closing grippers.
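As a rough sketch of the idea (not Taylor's actual code; the topic names and the gripper command message are assumptions), a node might watch the Wiimote's joystick messages and toggle a gripper on a button press:

// Rough sketch: toggle a gripper from a Wiimote button. Topic names and the
// gripper command message are assumptions; a real implementation would also
// debounce the button rather than toggling on every message while held.
#include <ros/ros.h>
#include <sensor_msgs/Joy.h>
#include <std_msgs/Float64.h>

ros::Publisher gripper_pub;
bool gripper_open = false;

void joyCallback(const sensor_msgs::Joy::ConstPtr& joy)
{
    if (!joy->buttons.empty() && joy->buttons[0] == 1)  // e.g. the A button
    {
        gripper_open = !gripper_open;
        std_msgs::Float64 cmd;
        cmd.data = gripper_open ? 1.0 : 0.0;  // hypothetical open/close value
        gripper_pub.publish(cmd);
    }
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "wiimote_gripper_teleop");
    ros::NodeHandle nh;
    gripper_pub = nh.advertise<std_msgs::Float64>("gripper_command", 1);
    ros::Subscriber sub = nh.subscribe("joy", 10, joyCallback);
    ros::spin();
    return 0;
}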
Taylor also collaborated with Patrick Goebel to add leg controls for moving the robot. Placing one leg forward or backward moves the robot in that direction. Placing a leg to the side makes the robot turn.
You can watch Taylor's new video above, where he puts the Nao teleop through its paces. If you have ever wanted to see a Nao wield a knife, play chess, or grab a tissue out of a box, check it out.
Halit Bener SUAY entered the ROS 3D Contest with this entry demonstrating teleoperation of an Aldebaran Nao using a Kinect. This is not the only entry to tackle teleoperation, but it adds its own unique twists. Most notably, there are pre-defined gestures that enable the operator to switch between different modes of control. One leg controls starting and stopping the robot. Another enables the operator to switch between controlling the body and the head. Your arms can either directly control the robot's arms or issue other commands, like directing the robot's gaze. All in all, it's a great demo of how we can go completely remoteless and still control a complex, walking robot like the Nao.
Credit: Nikolas Engelhard, Felix Endres, Juergen Hess, Juergen Sturm, Daniel Kuhner, Philipp Ruchti, and Wolfram Burgard
The University of Freiburg team has put together an impressive 6D-SLAM library as their entry into the ROS 3D Contest. By taking advantage of the additional 3D data that a Kinect provides, they've set a new benchmark for the state of the art in the field. It's also a great demo that we can all try ourselves: pick up your Kinect, move it around, and build 3D models of your world.
We're now busy judging the eighteen awesome entries to the ROS 3D Contest. There's everything from teleoperation to games to libraries for registration and calibration. It's going to be tough choosing which of them get prizes.
You can go ahead and check out the entries yourself. In most cases, you should even be able to download and try them out on your own Kinect or PrimeSense device.
While we tally the results, we'll spotlight the entries here.
First off are Garratt Gallagher's entries. Garratt was our most prolific entrant, producing a total of five separate entries. Each is worth its own blog post, and many of them have already been featured here:
We're grateful that Garratt has taken the time not only to enter the contest, but to go the extra mile to make sure that others can try out his libraries and build on his creative ideas. If you like what you see, you should consider helping out his Bilibot project, a low-cost Kinect + Create platform.
Garratt's newest entry is "Customizable Buttons". Using the Kinect, you can draw on a piece of paper to create your own music board. It's a lot of fun, as you'll see in the video:
We think that ROS and the PR2 are great tools for educators. Both platforms allow students to focus on building the relevant parts of a system while incorporating less topical components from the open source community. Students get started faster and complete more impressive projects. Even more importantly, students can take components built in ROS to their next course, research project or job without worrying about licensing.
We've started a wiki page to list courses using ROS or the PR2, and to discuss teaching-related issues. Here are some course examples that you can use for inspiration:
- CoTeSys-ROS Fall School on Cognition-enabled Mobile Manipulation (TU Munich)
- PR2 Beta Workshop (Willow Garage)
University (Undergraduate & Graduate) Courses
- CSE553: Mobile Robotics (Washington University in St Louis)
- CS1480: Building Intelligent Robots (Brown University)
- CS225B: Robot Programming Laboratory (Stanford University)
- CS324: Robot Perception (Stanford University)
- MEAM620: Robotics (University of Pennsylvania)
- Advanced Robotics Systems (KU Leuven)
- Autonomous Vehicles, part of the Freshman Research Initiative (University of Texas at Austin)
If you're teaching a course using ROS or the PR2, please post a link at ros.org/wiki/Courses. If you have advice on setting up labs, course computers, or any other teaching-related topic, post those too. By sharing material, we'll all create effective courses more quickly.
Taylor Veltrop has made the first entry to our ROS 3D Contest. He uses the Kinect and NITE to put a Kondo-style humanoid through pushups, waves, and other arm-control gestures. Great work! We look forward to seeing more entries.
Please take a look at my entry in the Kinect/RGB-D contest! I'm really happy with how it's turned out so far.
It's a small humanoid hobby robot by Kondo with a Roboard running ROS. The arms are controlled master/slave style over the network by a Kinect.
For Kinect/OpenNI users and VSLAM researchers, we're working on integrating Hauke Strasdat's ScaViSLAM framework into ROS. ScaViSLAM is a general and scalable framework for visual SLAM that should enable exciting applications like constructing 3D models of environments, creating 3D models of objects, augmented reality, and autonomous navigation.
We hope to release the ScaViSLAM library in Spring of 2011.