Utilizing Relevant RGB–D Data to Help Recognize RGB Images in the Target Domain
Published Online: Sep 28, 2019
Page range: 611 - 621
Received: Nov 30, 2018
Accepted: Apr 29, 2019
DOI: https://doi.org/10.2478/amcs-2019-0045
© 2019 Depeng Gao et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
With the advent of 3D cameras, it has become much easier to capture depth information along with RGB images, which is helpful in various computer vision tasks. However, there are two challenges in using such RGB-D images to help recognize RGB images captured by conventional cameras: the depth images are missing at the testing stage, and the training and test data are drawn from different distributions because they are captured with different equipment. To address both challenges jointly, we propose an asymmetrical transfer learning framework in which three classifiers are trained on the RGB and depth images in the source domain and the RGB images in the target domain, following the structural risk minimization criterion and regularization theory. A cross-modality co-regularizer is used to constrain the two source-domain classifiers to be consistent with each other, which increases accuracy. Moreover, an
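The abstract does not give the exact objective, but a minimal sketch of the idea might look as follows, assuming linear classifiers, a squared loss, L2 (structural-risk) regularization, and a simple prediction-consistency co-regularizer between the two source-domain classifiers; all names (train_asymmetric_transfer, w_rgb_s, w_dep_s, w_rgb_t, lam, gamma) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: not the paper's actual formulation.
# Assumes linear classifiers, squared loss, L2 regularization, and a
# prediction-consistency co-regularizer between the two source classifiers.
import numpy as np

def train_asymmetric_transfer(Xs_rgb, Xs_dep, ys, Xt_rgb, yt,
                              lam=1e-2, gamma=1e-1, lr=1e-2, epochs=500):
    """Jointly fit three linear classifiers:
       - w_rgb_s on source RGB features and w_dep_s on paired source depth features,
       - w_rgb_t on (labelled) target RGB features.
       A co-regularizer pushes the two source classifiers to agree on the
       paired source samples (cross-modality consistency)."""
    d_rgb, d_dep = Xs_rgb.shape[1], Xs_dep.shape[1]
    w_rgb_s = np.zeros(d_rgb)
    w_dep_s = np.zeros(d_dep)
    w_rgb_t = np.zeros(d_rgb)
    n_s, n_t = len(ys), len(yt)
    for _ in range(epochs):
        # residuals of the three squared-loss (empirical risk) terms
        r_rgb_s = Xs_rgb @ w_rgb_s - ys
        r_dep_s = Xs_dep @ w_dep_s - ys
        r_rgb_t = Xt_rgb @ w_rgb_t - yt
        # cross-modality consistency: the two source classifiers should
        # predict similarly on the same paired source samples
        diff = Xs_rgb @ w_rgb_s - Xs_dep @ w_dep_s
        # gradients of risk + L2 regularization + co-regularizer
        g_rgb_s = Xs_rgb.T @ r_rgb_s / n_s + lam * w_rgb_s + gamma * Xs_rgb.T @ diff / n_s
        g_dep_s = Xs_dep.T @ r_dep_s / n_s + lam * w_dep_s - gamma * Xs_dep.T @ diff / n_s
        g_rgb_t = Xt_rgb.T @ r_rgb_t / n_t + lam * w_rgb_t
        w_rgb_s -= lr * g_rgb_s
        w_dep_s -= lr * g_dep_s
        w_rgb_t -= lr * g_rgb_t
    return w_rgb_s, w_dep_s, w_rgb_t
```

In such a setup, only the target-domain RGB classifier would be used at test time, so no depth image is required, which is consistent with the first challenge described above.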