Recently in components Category

The Dawn of a New NVIDIA Jetson: the Uniquely Capable ROSbot Brain

From Barrett Williams

The brand-new NVIDIA Jetson TX2 Developer Kit runs ROS Kinetic!

With its 256-core Pascal-based GPU supporting CUDA 8.0 and cuDNN 5.1, the Jetson TX2 executes object classification, SLAM, and localization with low latency, short control loops, and high framerates, all on a low power budget. In addition to four ARMv8 Cortex-A57 cores, the Jetson TX2 also sports two Denver cores for additional performance in single-threaded workloads.

Like the Jetson TX1 before it, NVIDIA's new Jetson TX2 supports ROS Kinetic Kame on Ubuntu 16.04 LTS (Xenial). Thanks to a partnership with OSRF, roboticists can now deploy deep learning on the most power-efficient embedded platform available today. (The TX2 ships with a fresher 4.4 Linux kernel, which the TX1 will receive in the coming months.)

Connect sensors and other peripherals over USB 3.0, Gigabit Ethernet (now built into the TX2 chip), PCI-Express (x4 or x2 + x1), I2C, CAN bus, or UART. Find custom carriers from Auvidea and Connect Tech to integrate into any chassis that can accommodate a credit card.

From the Toyota HSR to Fellow Robots' NAVii, some of the most sophisticated robots, drones, and intelligent machines on the market today run ROS on Jetson. What will you build around ROS Kinetic and Jetson TX2?

Learn more here:

https://devblogs.nvidia.com/parallelforall/jetson-tx2-delivers-twice-intelligence-edge

https://github.com/dusty-nv/rosdeeplearning

https://github.com/mit-racecar/

https://devtalk.nvidia.com/default/board/188/jetson-tx2/

https://developer.nvidia.com/embedded/twodaystoademo

Buy here for $599 ($299 academic discount; EU pricing and discounts available now, APAC in April):

https://developer.nvidia.com/embedded/buy/jetson-tx2-devkit

HTC Vive as an indoor localization reference for ROS robots

From Limor Schweitzer and his team at RoboSavvy:


A small step for Virtual Reality (VR), a big step for autonomous robots. One of the key issues with autonomous robot applications is indoor localization. The HTC Vive has single-handedly solved this long-standing problem.

This $800 system (which should drop to $200 in a few months, once lighthouses and base stations, along with minuscule lighthouse sensors, become available without the headset) is comparable to a $150,000 IR-marker multi-camera system. The Vive gives you 60 fps and 0.3 mm resolution across any indoor volume (currently a 5 m cube, but this will be extendable). Unless you are flying indoor 3D drones, you don't need more than 60 Hz, and a camera system only gives ~cm resolution. No other indoor localization system comes anywhere close to the Vive's specs.

Initially the idea was to just use this to calibrate our robot's odometry/localization mechanisms (visual, wheels, LIDAR, IMU). However, there was an unexpected turn of events over the past month: Valve is opening up the technology for non-VR applications, so it may actually be possible to rely on this for real indoor applications and use the other forms of localization as backup.

We ended up integrating the Vive API for tracking the handheld devices with ROS. This provides ROS robots with the most precise absolute indoor localization reference. Source code is available at:

https://github.com/robosavvy/vive_ros
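As a rough idea of how such a tracking reference can be consumed on the ROS side, here is a minimal rospy sketch that looks up a controller pose on TF. It assumes the node broadcasts controller poses as TF frames; the frame names "vive_world" and "controller_1" are assumptions, so check the frames the node actually publishes (e.g. with "rosrun tf view_frames").

    #!/usr/bin/env python
    # Minimal sketch: read a Vive controller pose broadcast on TF.
    # Frame names below are assumptions, not the package's documented names.
    import rospy
    import tf

    if __name__ == '__main__':
        rospy.init_node('vive_pose_listener')
        listener = tf.TransformListener()
        rate = rospy.Rate(60)  # the Vive tracks at 60 Hz
        while not rospy.is_shutdown():
            try:
                trans, rot = listener.lookupTransform('vive_world', 'controller_1', rospy.Time(0))
                rospy.loginfo('controller at x=%.4f y=%.4f z=%.4f', *trans)
            except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
                pass  # transform not available yet
            rate.sleep()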

Announcing a package for the Schunk lightweight robot arm LWA4P

From Georg Heppner via ros-users@

It is my pleasure to announce the schunk_canopen_driver [1] package that you can use to control the lightweight robot arm LWA4P [2] produced by Schunk.

The LWA4P, or Powerball arm, is, as most of you probably know, a lightweight robot arm that is especially suited for mobile applications thanks to its internal controllers, which remove the need for a separate controller cabinet. The package was specifically designed for the LWA4P and currently supports interpolated position as well as profile position mode, full EMCY messages, PDO reconfiguration, and much more. It comes with a detailed 3D model, a URDF, and everything else you need. The package was tested on multiple platforms with Indigo, Jade, and Kinetic, and has already worked well during public exhibitions such as the Schunk Expert Days. Comprehensive documentation is already provided on the wiki and should allow you to easily use the package in your projects.

The package is currently available via git [3] and the package manager. For older distributions, some ros_control workarounds are required. It is designed to work with the PEAK CAN adapters in chardev mode.
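Since the driver exposes its joints through ros_control, commanding the arm can look roughly like the sketch below, assuming a joint_trajectory_controller has been loaded on top of the driver. The action name and joint names are assumptions; take the real ones from the controller configuration and URDF shipped with the package.

    #!/usr/bin/env python
    # Sketch of commanding the LWA4P through a ros_control trajectory controller.
    # Controller and joint names below are placeholders, not the package's defaults.
    import rospy
    import actionlib
    from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
    from trajectory_msgs.msg import JointTrajectoryPoint

    rospy.init_node('lwa4p_demo')
    client = actionlib.SimpleActionClient(
        'arm/joint_trajectory_controller/follow_joint_trajectory',
        FollowJointTrajectoryAction)
    client.wait_for_server()

    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = ['arm_%d_joint' % i for i in range(1, 7)]
    point = JointTrajectoryPoint()
    point.positions = [0.0, -0.5, 0.5, 0.0, 0.5, 0.0]  # radians, arbitrary demo pose
    point.time_from_start = rospy.Duration(5.0)
    goal.trajectory.points.append(point)

    client.send_goal(goal)
    client.wait_for_result()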

Please let me know if you have any feedback, suggestions or any trouble using the package.

Best regards, Georg Heppner

[1] http://wiki.ros.org/schunk_canopen_driver
[2] http://mobile.schunk-microsite.com/en/produkte/products/powerball-lightweight-arm-lwa-4p.html
[3] https://github.com/fzi-forschungszentrum-informatik/schunk_canopen_driver

ROS on Lego Mindstorms EV3

From Christian Holl via ros-users@

If you are interested in running ROS on the Lego EV3 brick,
you might want to check out my EV3 Yocto image (alpha) and the node I created for it.

Getting Started is here: http://hacks4ros.github.io/h4r_ev3_ctrl/

You can find the sources for the node here:

https://github.com/Hacks4ROS/h4r_ev3_ctrl

And the Yocto layer I created for it is here:

https://github.com/Hacks4ROS/meta-h4r-ev3

The node uses ros_control to initialize and control the joints and to read the sensors.
You should be able to use the standard ros_control velocity and position controllers for the joints.

I have also already provided a controller for every EV3 sensor. The basic idea is that
the ev3_manager node runs on the EV3 while the ROS master and everything else
run on your PC, and you load controllers with launch files on your PC.
This is because the EV3 has so little RAM that even running the ROS master seems to be too much for it,
and it also means you do not have to cross-compile software for it.
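Once a standard velocity controller is loaded from your PC, driving a motor is just a matter of publishing to its command topic. The controller name and topic below are assumptions for illustration; use the names from your own controller YAML and launch files.

    #!/usr/bin/env python
    # Sketch: drive one EV3 motor through a standard ros_control velocity controller.
    # "motor_a_velocity_controller" is a placeholder controller name.
    import rospy
    from std_msgs.msg import Float64

    rospy.init_node('ev3_velocity_demo')
    pub = rospy.Publisher('/motor_a_velocity_controller/command', Float64, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(Float64(2.0))  # target velocity in the controller's units
        rate.sleep()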

If you have problems, suggestions, or comments, visit my GitHub repo or write to me directly.

I hope you have fun with it and it is useful for you.


By the way: the ROS packages are provided by the meta-ros layer, so the ROS version on the brick is currently Indigo.

Modbus packages for the Cognex In-Sight camera and Siemens S7 PLC

From Wagdi Ben yaala via ros-users@

We just published three packages for interfacing your ROS workstation, using Modbus TCP communication, with industrial components such as the well-known In-Sight camera from Cognex and the Siemens S7 PLC.

You'll find links and a quick tutorial for all three packages here:
http://www.generationrobots.com/blog/en/2015/04/cognex-siemens-plc-modbus-pkg/

Modbus package: http://www.generationrobots.com/en/content/87-modbus-package
Cognex In-Sight Modbus package: http://www.generationrobots.com/en/content/88-modbus-cognex-in-sight
Siemens S7 PLC Modbus package: http://www.generationrobots.com/en/content/89-plc-siemens-modbus-ros-package

Here is also the link to the ROS wiki:
http://wiki.ros.org/modbus
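For readers new to Modbus TCP, the sketch below shows the kind of register reads and writes these packages wrap. It is not the API of the packages above; it uses the pymodbus Python library directly, and the IP address, register offsets, and counts are placeholders for your own device.

    #!/usr/bin/env python
    # Plain pymodbus sketch of Modbus TCP register access (illustrative only).
    from pymodbus.client.sync import ModbusTcpClient

    client = ModbusTcpClient('192.168.0.100', port=502)
    client.connect()

    # Read 10 holding registers starting at address 0 (e.g. a PLC data block
    # or a camera's output table, depending on the device configuration).
    result = client.read_holding_registers(0, 10, unit=1)
    if not result.isError():
        print(result.registers)

    # Write a single register, e.g. to set a flag the PLC program polls.
    client.write_register(0, 1, unit=1)
    client.close()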

Erle Robotics brain and vehicles

From Víctor Mayoral Vilches of Erle Robotics via ros-users@

Hi everyone,

I'd like to introduce the Erle-Brain (https://erlerobotics.com/blog/product/erle-brain/) Linux autopilot, a ROS-powered embedded computer that lets you build different kinds of drones and robots.

Using Erle-Brain we've built several vehicles (Erle-Copter, Erle-Plane, Erle-Rover, ...) displayed at http://wiki.ros.org/Robots, and we keep exploring new paths. The brain runs the APM autopilot software (on Linux), which connects with ROS through the mavros bridge, allowing you to control the robots simply by publishing to ROS topics.

This ROS package (https://github.com/erlerobot/ros_erle_takeoff_land) shows a simple example of how to autonomously take off and land a VTOL powered by Erle-Brain.
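The linked package is the working example; as a rough idea of what the mavros side of such a takeoff/land cycle can look like, here is a hedged rospy sketch using the standard mavros arming, takeoff, and land services. The flight mode string and altitude values are placeholders and depend on the APM configuration of the vehicle.

    #!/usr/bin/env python
    # Rough sketch of a takeoff/land cycle through standard mavros services.
    # Mode name and altitudes are placeholders; not the linked package itself.
    import rospy
    from mavros_msgs.srv import CommandBool, CommandTOL, SetMode

    rospy.init_node('takeoff_land_demo')
    for srv in ('/mavros/set_mode', '/mavros/cmd/arming',
                '/mavros/cmd/takeoff', '/mavros/cmd/land'):
        rospy.wait_for_service(srv)

    set_mode = rospy.ServiceProxy('/mavros/set_mode', SetMode)
    arm = rospy.ServiceProxy('/mavros/cmd/arming', CommandBool)
    takeoff = rospy.ServiceProxy('/mavros/cmd/takeoff', CommandTOL)
    land = rospy.ServiceProxy('/mavros/cmd/land', CommandTOL)

    set_mode(custom_mode='GUIDED')  # placeholder flight mode
    arm(value=True)
    takeoff(altitude=2.0)           # metres, placeholder
    rospy.sleep(10.0)               # hover for a while
    land(altitude=0.0)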

We are really excited to see what people can do with our brain and vehicles, so we've decided to launch a program called dronEDU (dronedu.es) that offers discounts for educational and research purposes.
Feel free to get in touch with us if you are interested.

Visualizer of delta robots using ROS and EtherCAT


From Diego Escudero

Last year we started thinking about using ROS as a development framework to make our lives easier when developing our own machine controller, the NJ controller, which integrates robotics functions.

[Image: delta_conveyor.jpg]

During the initial analysis we defined four use cases and decided to implement one of them: a visualizer of parallel robots to be used during the system-testing phase of the controller. The work was done by F. Martí during his winter internship at OMRON Industrial Automation.

The NJ controller is programmed using the SYSMAC Studio IDE and controls 64 servo drives through the EtherCAT fieldbus. Because of that, the visualizer is installed on a normal computer fitted with an EtherCAT slave card, and it runs on Ubuntu 14.04 LTS and ROS Indigo.

The visualizer is composed of two main parts:

  • The simulator of EtherCAT slaves.
  • The ROS meta-package that visualizes the family of delta robots.

[Image: delta_ros.png]

The EtherCAT slave simulator reads the commanded axis positions sent by the controller, forwards them to a ROS node, and closes the loop by providing feedback to the controller. The ROS meta-package then calculates the full poses of the delta robots from the commanded positions and visualizes them in rviz.
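To illustrate the rviz-facing half of such a pipeline, here is a hedged sketch that takes commanded actuator angles (from an assumed Float64MultiArray topic fed by the EtherCAT bridge) and republishes them as a JointState message for robot_state_publisher and rviz. Topic and joint names are placeholders, and the passive-link angles of the delta kinematics, which the real meta-package computes, are omitted here.

    #!/usr/bin/env python
    # Sketch: republish commanded axis positions as joint states for rviz.
    # Topic and joint names are assumptions for illustration only.
    import rospy
    from std_msgs.msg import Float64MultiArray
    from sensor_msgs.msg import JointState

    def on_axes(msg):
        js = JointState()
        js.header.stamp = rospy.Time.now()
        js.name = ['active_joint_1', 'active_joint_2', 'active_joint_3']
        js.position = list(msg.data[:3])
        joint_pub.publish(js)

    rospy.init_node('delta_axes_to_joint_states')
    joint_pub = rospy.Publisher('/joint_states', JointState, queue_size=10)
    rospy.Subscriber('/ethercat_bridge/commanded_axes', Float64MultiArray, on_axes)
    rospy.spin()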

In the video below you can see the system working at the Robotics Lab of the Motion Development Team, located in Barcelona, Spain.

Visualiser of Delta robots using ROS and EtherCAT from FelipMarti on Vimeo.

Find this blog and more at planet.ros.org.

