Open Access

Human action recognition using descriptor based on selective finite element analysis


References

[1] A. F. Bobick and J. W. Davis, "The recognition of human movement using temporal templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257-267, 2001. DOI: 10.1109/34.910878

[2] R. Souvenir and J. Babbs, "Learning the viewpoint manifold for action recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08), pp. 1-7, 2008. DOI: 10.1109/CVPR.2008.4587552

[3] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," IEEE International Conference on Computer Vision (ICCV'05), vol. 2, pp. 1395-1402, 2005. DOI: 10.1109/ICCV.2005.28

[4] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2247-2253, 2007. DOI: 10.1109/TPAMI.2007.70711

[5] K. Guo, P. Ishwar, and J. Konrad, "Action recognition from video using feature covariance matrices," IEEE Transactions on Image Processing, vol. 22, no. 6, pp. 2479-2494, 2013. DOI: 10.1109/TIP.2013.2252622

[6] Y. Chen, Z. Li, X. Guo, Y. Zhao, and A. Cai, "A spatio-temporal interest point detector based on vorticity for action recognition," IEEE International Conference on Multimedia and Expo Workshops, pp. 1-6, 2013.

[7] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, "Learning realistic human actions from movies," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008. DOI: 10.1109/CVPR.2008.4587756

[8] S. Savarese, A. DelPozo, J. C. Niebles, and L. Fei-Fei, "Spatial-temporal correlations for unsupervised action classification," Proceedings of the IEEE Workshop on Motion and Video Computing, pp. 1-8, 2008. DOI: 10.1109/WMVC.2008.4544068

[9] M. S. Ryoo and J. K. Aggarwal, "Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities," IEEE 12th International Conference on Computer Vision, pp. 1593-1600, 2009. DOI: 10.1109/ICCV.2009.5459361

[10] I. Laptev and T. Lindeberg, "Space-time interest points," Proceedings of the Ninth IEEE International Conference on Computer Vision, pp. 432-439, 2003. DOI: 10.1109/ICCV.2003.1238378

[11] A. Klaser, M. Marszalek, and C. Schmid, "A spatio-temporal descriptor based on 3D-gradients," Proceedings of the British Machine Vision Conference, pp. 995-1004, 2008. DOI: 10.5244/C.22.99

[12] G. Willems, T. Tuytelaars, and L. Van Gool, "An efficient dense and scale-invariant spatio-temporal interest point detector," European Conference on Computer Vision (ECCV), LNCS 5303, pp. 650-663, 2008. DOI: 10.1007/978-3-540-88688-4_48

[13] M. Chen and A. Hauptmann, "MoSIFT: Recognizing human actions in surveillance videos," Technical Report CMU-CS-09-161, Carnegie Mellon University, 2009.

[14] N. Ballas, L. Yao, C. Pal, and A. Courville, "Delving deeper into convolutional networks for learning video representations," International Conference on Learning Representations, 2016.

[15] L. Wang, Y. Qiao, and X. Tang, "Action recognition with trajectory-pooled deep-convolutional descriptors," IEEE Conference on Computer Vision and Pattern Recognition, pp. 4305-4314, 2015. DOI: 10.1109/CVPR.2015.7299059

[16] L. Sun, K. Jia, D. Yeung, and B. E. Shi, "Human action recognition using factorized spatio-temporal convolutional networks," IEEE International Conference on Computer Vision (ICCV), pp. 4597-4605, 2015. DOI: 10.1109/ICCV.2015.522

[17] D. K. Vishwakarma and K. Singh, "Human activity recognition based on the spatial distribution of gradients at sub-levels of average energy silhouette images," IEEE Transactions on Cognitive and Developmental Systems, vol. 9, no. 4, pp. 316-327, 2017. DOI: 10.1109/TCDS.2016.2577044

[18] D. K. Vishwakarma and R. Kapoor, "Hybrid classifier based human activity recognition using the silhouettes and cells," Expert Systems with Applications, vol. 42, no. 20, pp. 6957-6965, 2015. DOI: 10.1016/j.eswa.2015.04.039

[19] D. Wu and L. Shao, "Silhouette analysis-based action recognition via exploiting human poses," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 2, pp. 236-243, 2013. DOI: 10.1109/TCSVT.2012.2203731

[20] D. Weinland, M. Özuysal, and P. Fua, "Making action recognition robust to occlusions and viewpoint changes," European Conference on Computer Vision (ECCV), pp. 635-648, 2010. DOI: 10.1007/978-3-642-15558-1_46

[21] B. Saghafi and D. Rajan, "Human action recognition using pose-based discriminant embedding," Signal Processing: Image Communication, vol. 27, no. 1, pp. 96-111, 2012. DOI: 10.1016/j.image.2011.05.002

[22] A. A. Chaaraoui, P. Climent-Pérez, and F. Flórez-Revuelta, "Silhouette-based human action recognition using sequences of key poses," Pattern Recognition Letters, vol. 34, no. 15, pp. 1799-1807, 2013. DOI: 10.1016/j.patrec.2013.01.021

[23] G. Goudelis, K. Karpouzis, and S. Kollias, "Exploring trace transform for robust human action recognition," Pattern Recognition, vol. 46, no. 12, pp. 3238-3248, 2013. DOI: 10.1016/j.patcog.2013.06.006

[24] R. Touati and M. Mignotte, "MDS-based multi-axial dimensionality reduction model for human action recognition," Canadian Conference on Computer and Robot Vision, pp. 262-267, 2014. DOI: 10.1109/CRV.2014.42

[25] H. Han and X. J. Li, "Human action recognition with sparse geometric features," The Imaging Science Journal, vol. 63, no. 1, pp. 45-53, 2015. DOI: 10.1179/1743131X14Y.0000000091

[26] Y. Fu, T. Zhang, and W. Wang, "Sparse coding-based space-time video representation for action recognition," Multimedia Tools and Applications, vol. 76, no. 10, pp. 12645-12658, 2017. DOI: 10.1007/s11042-016-3630-9

[27] J. Lei, G. Li, J. Zhang, Q. Guo, and D. Tu, "Continuous action segmentation and recognition using hybrid convolutional neural network-hidden Markov model," IET Computer Vision, vol. 10, no. 6, pp. 537-544, 2016. DOI: 10.1049/iet-cvi.2015.0408

[28] H. Liu, N. Shu, Q. Tang, and W. Zhang, "Computational model based on the neural network of visual cortex for human action recognition," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1427-1440, 2017. DOI: 10.1109/TNNLS.2017.2669522

[29] Y. Shi, Y. Tian, Y. Wang, and T. Huang, "Sequential deep trajectory descriptor for action recognition with three-stream CNN," IEEE Transactions on Multimedia, vol. 19, no. 7, pp. 1510-1520, 2017. DOI: 10.1109/TMM.2017.2666540

[30] "2D Triangular Elements," The University of New Mexico, http://www.unm.edu/bgreen/ME360/2D%20Triangular%20Elements.pdf. Accessed 24 February 2010.

[31] D. K. Jha, T. Kant, and R. K. Singh, "An accurate two-dimensional theory for deformation and stress analysis of functionally graded thick plates," International Journal of Advanced Structural Engineering, pp. 6-7, 2014. DOI: 10.1007/s40091-014-0062-5

[32] J. Dou and J. Li, "Robust human action recognition based on spatio-temporal descriptors and motion temporal templates," Optik, vol. 125, no. 7, pp. 1891-1896, 2014. DOI: 10.1016/j.ijleo.2013.10.022

[33] Q. Song, W. Hu, and W. Xie, "Robust support vector machine for bullet hole image classification," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 32, pp. 440-448, 2002. DOI: 10.1109/TSMCC.2002.807277

[34] S. S. Keerthi and C.-J. Lin, "Asymptotic behaviors of support vector machines with Gaussian kernel," Neural Computation, vol. 15, no. 7, pp. 1667-1689, 2003. DOI: 10.1162/089976603321891855

[35] C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: a local SVM approach," Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 2004. DOI: 10.1109/ICPR.2004.1334462

[36] T. Guha and R. K. Ward, "Learning sparse representations for human action recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 8, pp. 1576-1588, 2012. DOI: 10.1109/TPAMI.2011.253

[37] D. Weinland, R. Ronfard, and E. Boyer, "Free viewpoint action recognition using motion history volumes," Computer Vision and Image Understanding, vol. 104, no. 2-3, pp. 249-257, 2006. DOI: 10.1016/j.cviu.2006.07.013

[38] S. A. Rahman, I. Song, M. K. H. Leung, I. Lee, and K. Lee, "Fast action recognition using negative space features," Expert Systems with Applications, vol. 41, no. 2, pp. 574-587, 2014. DOI: 10.1016/j.eswa.2013.07.082

[39] I. Gomez-Conde and D. N. Olivieri, "A KPCA spatio-temporal differential geometric trajectory cloud classifier for recognizing human actions in a CBVR system," Expert Systems with Applications, vol. 42, no. 13, pp. 5472-5490, 2015. DOI: 10.1016/j.eswa.2015.03.010

[40] L. Juan and O. Gwun, "A comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing, vol. 3, no. 4, pp. 143-152, 2009.

[41] Y. Wang and G. Mori, "Human action recognition using semilatent topic models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 10, pp. 1762-1764, 2009. DOI: 10.1109/TPAMI.2009.43

[42] L.-M. Xia, J.-X. Huang, and L.-Z. Tan, "Human action recognition based on chaotic invariants," Journal of Central South University, vol. 20, no. 11, pp. 3171-3179, 2013. DOI: 10.1007/s11771-013-1841-z

[43] A. Iosifidis, A. Tefas, and I. Pitas, "Discriminant bag of words based representation for human action recognition," Pattern Recognition Letters, vol. 49, no. 1, pp. 185-192, 2014. DOI: 10.1016/j.patrec.2014.07.011

[44] X. Wu, D. Xu, L. Duan, and J. Luo, "Action recognition using context and appearance distribution features," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 489-496, 2011. DOI: 10.1109/CVPR.2011.5995624

[45] D. Weinland, M. Özuysal, and P. Fua, "Making action recognition robust to occlusions and viewpoint changes," European Conference on Computer Vision (ECCV), pp. 635-648, 2010. DOI: 10.1007/978-3-642-15558-1_46

[46] E. A. Mosabbeb, K. Raahemifar, and M. Fathy, "Multi-view human activity recognition in distributed camera sensor networks," Sensors, vol. 13, no. 7, pp. 8750-8770, 2013. DOI: 10.3390/s130708750

[47] J. Wang, H. Zheng, J. Gao, and J. Cen, "Cross-view action recognition based on a statistical translation framework," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 8, pp. 1461-1475, 2016. DOI: 10.1109/TCSVT.2014.2382984

eISSN:
1339-309X
Language:
English
Publication frequency:
6 times per year
Journal subjects:
Engineering, Introductions and Overviews, other