Andriluka, M., Pishchulin, L., Gehler, P. and Schiele, B. (2014). 2D human pose estimation: New benchmark and state of the art analysis, IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, pp. 3686–3693.
Clarembaux, L.G., Pérez, J., Gonzalez, D. and Nashashibi, F. (2016). Perception and control strategies for autonomous docking for electric freight vehicles, Transportation Research Procedia 14: 1516–1522, DOI: 10.1016/j.trpro.2016.05.116.
Dreossi, T., Ghosh, S., Yue, X., Keutzer, K., Sangiovanni-Vincentelli, A. and Seshia, S.A. (2018). Counterexample-guided data augmentation, Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pp. 2071–2078.
Fan, Y. and Zhang, W. (2015). Traffic sign detection and classification for advanced driver assistant systems, International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, pp. 1335–1339.
Gawron, T., Mydlarz, M. and Michalek, M.M. (2019). Algorithmization of constrained monotonic maneuvers for an advanced driver assistant system in the intelligent urban buses, IEEE Intelligent Vehicles Symposium, Paris, France, pp. 232–238.
Geiger, A., Lenz, P. and Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite, Conference on Computer Vision and Pattern Recognition, Rhode Island, USA, pp. 3354–3361.
Girshick, R., Donahue, J., Darrell, T. and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation, IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, pp. 580–587.
Hartley, R.I. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, DOI: 10.1017/CBO9780511811685.
He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, pp. 770–778.
Kendall, A., Grimes, M. and Cipolla, R. (2015). PoseNet: A convolutional network for real-time 6-DOF camera relocalization, IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp. 2938–2946.
Kim, J., Cho, H., Hwangbo, M., Choi, J., Canny, J. and Kwon, Y.P. (2018). Deep traffic light detection for self-driving cars from a large-scale dataset, International Conference on Intelligent Transportation Systems (ITSC), Maui, USA, pp. 280–285.
Kukkala, V.K., Tunnell, J., Pasricha, S. and Bradley, T. (2018). Advanced driver-assistance systems: A path toward autonomous vehicles, IEEE Consumer Electronics Magazine 7(5): 18–25, DOI: 10.1109/MCE.2018.2828440.
Lepetit, V., Moreno-Noguer, F. and Fua, P. (2009). EPnP: An accurate O(n) solution to the PnP problem, International Journal of Computer Vision 81(2): 155–166, DOI: 10.1007/s11263-008-0152-6.
Lim, K.L. and Bräunl, T. (2020). A review of visual odometry methods and its applications for autonomous driving, arXiv abs/2009.09193.
Liu, J.-J., Hou, Q., Cheng, M.-M., Wang, C. and Feng, J. (2020). Improving convolutional networks with self-calibrated convolutions, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10093–10102 (online).
Lu, X.X. (2018). A review of solutions for perspective-n-point problem in camera pose estimation, Journal of Physics: Conference Series 1087(5): 052009, DOI: 10.1088/1742-6596/1087/5/052009.
Luo, R.C., Liao, C.T., Su, K.L. and Lin, K.C. (2005). Automatic docking and recharging system for autonomous security robot, IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada, pp. 2953–2958.
Marchand, E., Spindler, F. and Chaumette, F. (2005). ViSP for visual servoing: A generic software platform with a wide class of robot control skills, IEEE Robotics and Automation Magazine 12(4): 40–52, DOI: 10.1109/MRA.2005.1577023.
Michałek, M. and Kiełczewski, M. (2015). The concept of passive control assistance for docking maneuvers with n-trailer vehicles, IEEE/ASME Transactions on Mechatronics 20(5): 2075–2084, DOI: 10.1109/TMECH.2014.2362354.
Michałek, M.M., Gawron, T., Nowicki, M. and Skrzypczyński, P. (2021). Precise docking at charging stations for large-capacity vehicles: An advanced driver-assistance system for drivers of electric urban buses, IEEE Vehicular Technology Magazine 16(3): 57–65, DOI: 10.1109/MVT.2021.3086979.
Michałek, M.M., Patkowski, B. and Gawron, T. (2020). Modular kinematic modelling of articulated buses, IEEE Transactions on Vehicular Technology 69(8): 8381–8394, DOI: 10.1109/TVT.2020.2999639.
Miseikis, J., Rüther, M., Walzel, B., Hirz, M. and Brunner, H. (2017). 3D vision guided robotic charging station for electric and plug-in hybrid vehicles, arXiv abs/1703.05381.
MMPose (2020). OpenMMLab pose estimation toolbox and benchmark, https://github.com/open-mmlab/mmpose.
Mur-Artal, R. and Tardós, J.D. (2017). ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras, IEEE Transactions on Robotics 33(5): 1255–1262, DOI: 10.1109/TRO.2017.2705103.
Nowak, T., Nowicki, M., Ćwian, K. and Skrzypczyński, P. (2019). How to improve object detection in a driver assistance system applying explainable deep learning, IEEE Intelligent Vehicles Symposium, Paris, France, pp. 226–231.
Nowak, T., Nowicki, M., Ćwian, K. and Skrzypczyński, P. (2020). Leveraging object recognition in reliable vehicle localization from monocular images, in C. Zieliński et al. (Eds), Automation 2020: Towards Industry of the Future, Springer, Cham, pp. 195–205, DOI: 10.1007/978-3-030-40971-5_18.
Olson, C. and Abi-Rached, H. (2010). Wide-baseline stereo vision for terrain mapping, Machine Vision and Applications 21(5): 713–725, DOI: 10.1007/s00138-009-0188-9.
Papandreou, G., Zhu, T., Kanazawa, N., Toshev, A., Tompson, J., Bregler, C. and Murphy, K. (2017). Towards accurate multi-person pose estimation in the wild, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, pp. 3711–3719.
Pérez, J., Nashashibi, F., Lefaudeux, B., Resende, P. and Pollard, E. (2013). Autonomous docking based on infrared system for electric vehicle charging in urban areas, Sensors 13(2): 2645–2663, DOI: 10.3390/s130202645.
Petrov, P., Boussard, C., Ammoun, S. and Nashashibi, F. (2012). A hybrid control for automatic docking of electric vehicles for recharging, IEEE International Conference on Robotics and Automation, St. Paul, USA, pp. 2966–2971.
Rahmat, R., Dennis, D., Sitompul, O., Sarah, P. and Budiarto, R. (2019). Advertisement billboard detection and geotagging system with inductive transfer learning in deep convolutional neural network, TELKOMNIKA (Telecommunication Computing Electronics and Control) 17(5): 2659, DOI: 10.12928/telkomnika.v17i5.11276.
Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2016). You only look once: Unified, real-time object detection, IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, pp. 779–788.
Ren, S., He, K., Girshick, R. and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks, Advances in Neural Information Processing Systems, Montreal, Canada, pp. 91–99.
Royer, E., Lhuillier, M., Dhome, M. and Chateau, T. (2005). Localization in urban environments: Monocular vision compared to a differential GPS sensor, IEEE Conference on Computer Vision and Pattern Recognition, San Diego, USA, Vol. 2, pp. 114–121.
Schubert, E., Sander, J., Ester, M., Kriegel, H.P. and Xu, X. (2017). DBSCAN revisited: Why and how you should (still) use DBSCAN, ACM Transactions on Database Systems 42(3): 1–21, DOI: 10.1145/3068335.
Schunk Carbon Technology (2021). Schunk smart charging, https://www.schunk-carbontechnology.com/en/smart-charging.
Skrzypczyński, P. (2009). Simultaneous localization and mapping: A feature-based probabilistic approach, International Journal of Applied Mathematics and Computer Science 19(4): 575–588, DOI: 10.2478/v10006-009-0045-z.
Taghibakhshi, A., Ogden, N. and West, M. (2021). Local navigation and docking of an autonomous robot mower using reinforcement learning and computer vision, 13th International Conference on Computer and Automation Engineering (ICCAE), Brussels, Belgium, pp. 10–14.
Toshpulatov, M., Lee, W., Lee, S. and Haghighian Roudsari, A. (2022). Human pose, hand and mesh estimation using deep learning: A survey, The Journal of Supercomputing 78(6): 7616–7654, DOI: 10.1007/s11227-021-04184-7.
Triggs, B., McLauchlan, P.F., Hartley, R.I. and Fitzgibbon, A.W. (2000). Bundle adjustment—A modern synthesis, in B. Triggs et al. (Eds), Vision Algorithms: Theory and Practice, Springer, Berlin, pp. 298–372, DOI: 10.1007/3-540-44480-7_21.
u-blox (2020). ZED-F9P: u-blox F9 high precision GNSS module, https://content.u-blox.com/sites/default/files/ZED-F9P-04B_DataSheet_UBX-21044850.pdf.
Vivacqua, R., Vassallo, R. and Martins, F. (2017). A low cost sensors approach for accurate vehicle localization and autonomous driving application, Sensors 17(10), Article no. 2359.
Wang, J. and Olson, E. (2016). AprilTag 2: Efficient and robust fiducial detection, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, pp. 4193–4198.
Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., Liu, W. and Xiao, B. (2021). Deep high-resolution representation learning for visual recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 43(10): 3349–3364, DOI: 10.1109/TPAMI.2020.2983686.
Xiang, Y., Schmidt, T., Narayanan, V. and Fox, D. (2018). PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes, Proceedings of Robotics: Science and Systems, Pittsburgh, USA, DOI: 10.15607/RSS.2018.XIV.019.
Youjing, C. and Shuzhi, S.G. (2003). Autonomous vehicle positioning with GPS in urban canyon environments, IEEE Transactions on Robotics and Automation 19(1): 15–25, DOI: 10.1109/TRA.2002.807557.
Zhang, W., Fu, C. and Zhu, M. (2020). Joint object contour points and semantics for instance segmentation, arXiv abs/2008.00460.