DynCoMM: Dynamic Collaborative control of Mobile Manipulators for complex picking
Name of demonstration
DynCoMM: Dynamic Collaborative control of Mobile Manipulators for complex picking
Main objective
The general objective of the project is to develop a novel computer vision guided collaborative control solution for mobile manipulators. This concept is represented in Figure 1, where the proposed technological modules are shown.
The integration and synchronization of the control and computer vision modules will be performed by an external real-time controller that manages the multiple agents and systems involved in the operation (i.e. mobile platforms, collaborative manipulators, perception devices and/or robotic tools).
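As an illustration of this coordination, the external controller can be thought of as a cyclic supervisor that reads the state of every agent and only commands the next operation when all of them are ready. The sketch below is a minimal assumption-based example; the agent names, readiness flags, and commands are illustrative, not the demonstrator's actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical agent state as seen by the external RT controller.
@dataclass
class AgentState:
    name: str
    ready: bool = False

def control_cycle(agents):
    """One supervisor cycle: command the pick only when every agent is ready."""
    if all(a.ready for a in agents):
        return "execute_pick"
    return "hold"

# Illustrative agents: mobile platform, manipulator, and vision module.
agents = [AgentState("mobile_platform", True),
          AgentState("manipulator", True),
          AgentState("vision", False)]
print(control_cycle(agents))  # -> hold (vision not ready yet)
agents[2].ready = True
print(control_cycle(agents))  # -> execute_pick
```

In a real deployment each cycle would also exchange pose and safety data with the agents at a fixed rate; here only the synchronization gate is shown.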
Short description
DynCoMM is an advanced robotic solution for the external control of mobile platforms performing dynamic operations in collaborative scenarios. The objective is to control and synchronize both the application operation and the platform dynamics simultaneously, increasing the productivity and flexibility of potential applications. Three prominent aspects must therefore be considered: the environment and process analysis, the collaborative integral control of the mobile manipulator, and the application integration. The DynCoMM solution includes the integration of computer vision algorithms for both the application and the environment reconstruction.
The developed Real-Time (RT) external control system will use the obtained information to perform complex picking applications in industrial collaborative scenarios. More specifically, the demonstrator focuses on complex manufacturing processes where manual operations are still needed, facing technology challenges in the fields of robotics and vision-based Human-Robot Collaboration (HRC). The project presents a novel solution for mobile manipulators, embedding the robot manipulator into a mobile platform to combine their benefits. The combination of both actuators increases productivity, as the mechanism will be synchronized to operate in RT with the manufacturing line workers, creating a human-machine collaborative environment. This high level of coordination between the mobile manipulator and the human operator is obtained by integrating a novel artificial vision framework designed to recognize the environment in RT and make decisions about the mobile platform position to maintain human safety without halting production.
This demonstration has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825196.
Owner of the demonstrator
Electrotécnica Alavesa
Responsible person
ELECTROTÉCNICA ALAVESA
Oihane Mayo Ijurra
Engineer at R&D department
E-mail: o.mayo@aldathink.es
NACE
M71 - Architectural and engineering activities; technical testing and analysis
Keywords
Robotics, Vision Systems, Machine Learning, Human-Robot Collaboration, Collaborative Robotics.
Benefits for the users
Transfer repetitive operations from humans to robots.
Automate complex production processes through HRC environments.
Implement higher automation and efficiency levels for increased competitiveness employing Computer Vision and AI-based control technologies for mobile manipulators.
Overcome picking limitations in dynamic mobile manipulation.
Advance technologies that enable self-adaptability and flexibility in industrial processes.
Innovation
Novel mobile manipulator that coordinates the robotic arm and the mobile platform through a novel artificial vision algorithm in RT.
New platform capable of performing dynamic manipulation of complex pieces in collaborative and hazardous environments.
Flexible and easy integration of RT controllers and computer vision solutions.
Risks and limitations
Failure in the integration of software modules and developments.
Inaccurate specifications of the use-case scenario or demonstrator platform.
Inaccuracies in the reconstruction of systems through Computer Vision.
Synchronization inaccuracies between robot and platform systems.
Loss of consortium partnership.
Failure to achieve industrial relevance.
Real-Time limitations in commercial hardware.
Failure to complete the demonstration.
Failure to manage the project effectively.
Technology readiness level
6 - Safety approved sensors and systems are commercially available
Sectors of application
Automotive, Aerospace, Medical, Electrical.
Potential sectors of application
Cross-sectoral approach through mechanical operation enhancement.
Patents / Licenses / Copyrights
Hardware / Software
Hardware:
KMR
External CPU
Computer Vision Cameras
iiwa 14 R820
Battery Charging Station
External control unit (ROS)
Operation Interface – HMI
Software:
ROS
TRINITY Modules
Sunrise
Photos
Depth-sensor Safety Model for HRC
Depth-based safety model for human-robot collaboration: Generates three different spatial zones in the shar...
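The three-zone idea can be illustrated with a minimal sketch that maps the closest detected human-robot distance (from the depth sensor) to a robot behaviour. The zone names and distance thresholds below are assumptions for illustration, not the module's actual parameters.

```python
# Hedged sketch of a three-zone depth-based safety policy.
# Thresholds (0.5 m, 1.5 m) are illustrative assumptions.
def safety_zone(min_distance_m: float) -> str:
    """Map the closest human distance in the shared workspace to a behaviour."""
    if min_distance_m < 0.5:      # critical zone: halt the robot
        return "stop"
    if min_distance_m < 1.5:      # warning zone: reduce speed
        return "slow"
    return "full_speed"           # free zone: nominal operation

print(safety_zone(0.3))   # -> stop
print(safety_zone(1.0))   # -> slow
print(safety_zone(2.0))   # -> full_speed
```

In practice the minimum distance would be computed per control cycle from the depth image, and the speed reduction would follow the applicable collaborative-robot safety standards.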
Object Classification
A deep convolutional neural network (CNN) is used to classify and sort objects. This is a robust and fast i...
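A small CNN classifier of the kind described can be sketched in PyTorch as follows; the architecture, class count, and input size are illustrative assumptions and not the module's actual network.

```python
import torch
import torch.nn as nn

class PickClassifier(nn.Module):
    """Hypothetical CNN for sorting picked pieces into classes (sketch only)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = PickClassifier()
# One dummy 64x64 RGB image; logits give one score per class.
logits = model(torch.zeros(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 3])
```

A deployed classifier would be trained on labelled images of the pieces to be sorted; only the forward pass is sketched here.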
Object Detection
The object detection module is used to perceive the changing environment and modify systems actions accordi...
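How detections might feed the system's actions can be sketched as a simple selection step: keep only sufficiently confident detections and pick the best one as the current target. The detection record format and confidence threshold below are illustrative assumptions, not the module's actual output.

```python
# Hedged sketch: choose a pick target from hypothetical detection records.
def select_pick_target(detections, min_confidence=0.8):
    """Return the highest-confidence detection above the threshold, or None."""
    candidates = [d for d in detections if d["confidence"] >= min_confidence]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d["confidence"])

dets = [{"label": "piece_a", "confidence": 0.91, "bbox": (10, 20, 60, 80)},
        {"label": "piece_b", "confidence": 0.55, "bbox": (100, 40, 150, 90)}]
print(select_pick_target(dets)["label"])  # -> piece_a
```

In the demonstrator this selection would run continuously so that system actions can be modified as the environment changes.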
Trainings
To learn more about the solution, click the link below to access the training on the Moodle platform.
Dynamic collaborative control of mobile manipulators for complex picking