Depth-sensor Safety Model for HRC

Main functionalities

Depth-based safety model for human-robot collaboration: generates three spatial zones in the shared workspace, which are then modelled, updated and monitored online.
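As an illustration of the zone idea, the sketch below classifies a single depth point into one of three zones by its distance to the robot. This is a minimal, hedged example, not the module's actual implementation: the zone radii, the function name and the use of a single reference point on the robot are all illustrative assumptions.

```cpp
// Minimal sketch: classify a depth point into one of three zones by its
// Euclidean distance to a reference point on the robot. The radii are
// illustrative assumptions, not the module's actual thresholds.
#include <pcl/point_types.h>
#include <cmath>

enum class Zone { SAFE, WARNING, CRITICAL };

Zone classifyPoint(const pcl::PointXYZ& p, const pcl::PointXYZ& robot)
{
  const float dx = p.x - robot.x;
  const float dy = p.y - robot.y;
  const float dz = p.z - robot.z;
  const float d  = std::sqrt(dx * dx + dy * dy + dz * dz);

  if (d < 0.5f) return Zone::CRITICAL;  // stop the robot
  if (d < 1.2f) return Zone::WARNING;   // warn / slow down
  return Zone::SAFE;                    // normal operation
}
```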

Technical specifications

An overview of the hardware requirements and the software nodes in the module is shown in Fig. 2. The workspace is monitored by a Kinect v2 sensor at a frame rate of 30 Hz, and the robot is a UR5 from the Universal Robots family. Other depth sensors can be used with the model as long as they provide the same depth-cloud data structure. All nodes exchange messages over the TCPROS transport layer: nodes that generate data publish it to a topic, and nodes that are interested in the data subscribe to that topic. Arrows show the direction of transmission.
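For context, a minimal roscpp subscriber following this publish/subscribe pattern might look as follows. The topic name follows the iai_kinect2 defaults and is an assumption here; sensor_msgs/PointCloud2 is the standard ROS depth-cloud message.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Report the size of each incoming depth cloud.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  ROS_INFO("Received cloud with %u points", msg->width * msg->height);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_listener");
  ros::NodeHandle nh;
  // Topic name is an assumption based on iai_kinect2 conventions.
  ros::Subscriber sub = nh.subscribe("/kinect2/sd/points", 1, cloudCallback);
  ros::spin();  // hand control to ROS; the callback fires per message
  return 0;
}
```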

Modified versions of the ur_modern_driver and universal_robot ROS packages are used to establish the communication channel between the robot's low-level controller and the safety system node. The iai_kinect2 ROS package receives data from the Kinect v2 sensor and forwards it to the safety node, which monitors the workspace for changes and safety violations.

The robot and the depth sensor are connected to a single laptop running the ROS Melodic distribution on Ubuntu 18.04, which performs all the computing. To compile the module, the OpenCV and PCL libraries must be installed in addition to the standard C++ libraries. Currently, the Kinect v2 and the Universal Robots UR5 are supported.

Inputs and outputs

All data is transferred via the standard ROS transport system with publish/subscribe semantics. The input and output data formats, together with the topic names, are shown in Fig. 5 and Fig. 6. The vision-based safety system subscribes to topics carrying the color image, the depth image, and the CameraInfo message, which contains the sensor's intrinsic parameters. In addition, the JointState message is used to generate the safety hull around the robot.

 

Figure 5. Data streams connected to the vision safety system node
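A minimal sketch of how these inputs could be wired up in roscpp is shown below. The topic names follow iai_kinect2 and ur_modern_driver conventions and are assumptions, as is the use of an approximate-time synchronizer to pair color, depth and camera-info messages.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/JointState.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <boost/bind.hpp>

using sensor_msgs::Image;
using sensor_msgs::CameraInfo;

// Called with time-synchronized color, depth and camera-info messages.
void framesCallback(const Image::ConstPtr& color,
                    const Image::ConstPtr& depth,
                    const CameraInfo::ConstPtr& info)
{
  // Update the workspace model from the synchronized frames here.
}

// Called with the current robot joint angles.
void jointsCallback(const sensor_msgs::JointState::ConstPtr& joints)
{
  // Regenerate the robot safety hull from the joint state here.
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "vision_safety_system");
  ros::NodeHandle nh;

  // Topic names are assumptions based on iai_kinect2 defaults.
  message_filters::Subscriber<Image> color_sub(nh, "/kinect2/hd/image_color", 1);
  message_filters::Subscriber<Image> depth_sub(nh, "/kinect2/hd/image_depth_rect", 1);
  message_filters::Subscriber<CameraInfo> info_sub(nh, "/kinect2/hd/camera_info", 1);

  typedef message_filters::sync_policies::ApproximateTime<Image, Image, CameraInfo> Policy;
  message_filters::Synchronizer<Policy> sync(Policy(10), color_sub, depth_sub, info_sub);
  sync.registerCallback(boost::bind(&framesCallback, _1, _2, _3));

  ros::Subscriber joint_sub = nh.subscribe("/joint_states", 1, jointsCallback);
  ros::spin();
  return 0;
}
```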

 

The node's only output is the stop command for the robot, which is published on the /ur_driver/dashboard_command topic.

 

Figure 6. Data streams created by the vision safety system node
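A minimal sketch of issuing that stop command is given below. The topic name comes from the module description, but the std_msgs/String message type and the "stop" payload are assumptions, since the exact command format of the modified driver is not documented here.

```cpp
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "safety_stop_demo");
  ros::NodeHandle nh;
  ros::Publisher stop_pub =
      nh.advertise<std_msgs::String>("/ur_driver/dashboard_command", 1);

  ros::Duration(1.0).sleep();  // allow the subscriber connection to form

  // Message type and payload are assumptions; only the topic name is
  // taken from the module description.
  std_msgs::String cmd;
  cmd.data = "stop";
  stop_pub.publish(cmd);

  ros::spinOnce();
  return 0;
}
```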

Formats and standards

The module uses the ROS communication layer together with the external image_transport package. Details of the message formats can be found at http://wiki.ros.org/sensor_msgs. In addition, ROS-Industrial, OpenCV, PCL, and the standard C++ and Python libraries are used.
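As a brief illustration of the image_transport usage mentioned above, a hedged subscriber sketch follows; the topic name is an assumption based on iai_kinect2 defaults.

```cpp
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/Image.h>

// Handle each incoming frame (e.g. convert with cv_bridge for OpenCV).
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  ROS_INFO("Frame %ux%u, encoding %s", msg->width, msg->height,
           msg->encoding.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_listener");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  // image_transport lets the same code receive raw or compressed streams;
  // the topic name is an assumption.
  image_transport::Subscriber sub =
      it.subscribe("/kinect2/hd/image_color", 1, imageCallback);
  ros::spin();
  return 0;
}
```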

Owner (organization)

Tampere University, Finland
https://research.tuni.fi/productionsystems/

Trainings

To learn more about the solution, follow the link below to access the training on the Moodle platform:
Depth-sensor safety model for HRC
