Multimodal Robot Programming Interface Based on RGB-D Perception and Neural Scene Understanding Modules
About this article
Published Online: Mar 04, 2024
Page range: 29 - 37
Received: Jan 14, 2023
Accepted: May 24, 2023
DOI: https://doi.org/10.14313/jamris/3-2023/20
© 2023 Bartłomiej Kulecki, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
In this paper, we propose a system for natural and intuitive interaction with a robot. Its purpose is to allow a person with no specialized knowledge or training in robot programming to program a robotic arm. We use data from an RGB-D camera to segment the scene and detect objects. We also estimate the configuration of the operator's hand and the position of a visual marker to determine the operator's intentions and the corresponding actions of the robot. To this end, we employ trained neural networks and operations on the input point clouds. Voice commands are used to define or trigger the execution of robot motions. Finally, we present a set of experiments that demonstrate the properties of the proposed system.
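The perception pipeline summarized above starts from an RGB-D point cloud that is separated into a supporting surface and object candidates. As a rough illustration of this kind of processing (a minimal sketch, not the authors' implementation), the example below uses the open-source Open3D library to build a point cloud from a registered RGB-D frame, remove the dominant plane with RANSAC, and cluster the remaining points into object candidates; the file names, camera intrinsics, and threshold values are assumptions.

```python
# Illustrative sketch only: tabletop segmentation of an RGB-D scene with Open3D.
# File paths, camera intrinsics, and thresholds are assumptions, not values
# taken from the paper.
import numpy as np
import open3d as o3d

# Build a point cloud from a registered RGB-D frame (hypothetical file names).
color = o3d.io.read_image("frame_color.png")
depth = o3d.io.read_image("frame_depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

# Remove the dominant plane (e.g., the tabletop) with RANSAC.
_, plane_idx = cloud.segment_plane(distance_threshold=0.01,
                                   ransac_n=3,
                                   num_iterations=1000)
objects = cloud.select_by_index(plane_idx, invert=True)

# Cluster the remaining points into object candidates with DBSCAN.
labels = np.asarray(objects.cluster_dbscan(eps=0.02, min_points=50))
for k in range(labels.max() + 1):
    cluster = objects.select_by_index(np.where(labels == k)[0].tolist())
    center = cluster.get_center()
    print(f"object candidate {k}: {len(cluster.points)} points, centroid {center}")
```

In the proposed system, such geometric segmentation is complemented by neural object detection, hand configuration estimation, and voice commands, which this sketch does not cover.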