Open Access

Multimodal Robot Programming Interface Based on RGB-D Perception and Neural Scene Understanding Modules


In this paper, we propose a system for natural and intuitive human–robot interaction. Its purpose is to allow a person with no specialized knowledge of or training in robot programming to program a robotic arm. We use data from an RGB-D camera to segment the scene and detect objects. We also estimate the configuration of the operator's hand and the position of a visual marker to determine the operator's intentions and the corresponding actions of the robot. To this end, we apply trained neural networks and operations on the input point clouds. In addition, voice commands are used to define or trigger the execution of a motion. Finally, we performed a set of experiments to demonstrate the properties of the proposed system.
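The pipeline summarized above operates on point clouds derived from the RGB-D camera. As an illustration only (not the authors' code), the standard pinhole back-projection that turns a depth image into such a point cloud can be sketched as follows; the intrinsics `fx`, `fy`, `cx`, `cy` are assumed to come from the camera's calibration:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud
    using the pinhole camera model. Pixels with zero depth (no sensor
    reading) are discarded."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy example: a flat surface 1 m in front of a 4x4 camera.
depth = np.ones((4, 4))
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

Segmentation and object detection would then operate on `pts` (or on the organized depth image directly); the neural modules mentioned in the abstract replace such hand-crafted steps where learned perception is required.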