Open Access

The Use of Multi-Feature Fusion in the Evaluation of Emotional Expressions in Spoken English

03 Sep 2024


Yu, Y., Han, L., Du, X., & Yu, J. (2022). An oral English evaluation model using artificial intelligence method. Mobile Information Systems, 2022(1), 3998886.

Kang, D., Goico, S., Ghanbari, S., Bennallack, K., Pontes, T., O’Brien, D., & Hargis, J. (2022). Providing an oral examination as an authentic assessment in a large section, undergraduate diversity class. International Journal for the Scholarship of Teaching and Learning, 13(2).

Yoke, S. K., Hasan, N. H., & Ahmad, H. (2024). Educators’ perspective of collaborative assessment in group oral discussion. International Journal of Academic Research in Progressive Education and Development, 13(1).

Block, D., & Mancho-Barés, G. (2020). NOT English teachers, except when they are: The curious case of oral presentation evaluation rubrics in an EMI-in-HE context. In The secret life of English-medium instruction in higher education (pp. 96-119). Routledge.

Ounis, A. (2017). The assessment of speaking skills at the tertiary level. International Journal of English Linguistics, 7(4), 95.

Qu, C., & Li, Y. (2022). Oral English auxiliary teaching system based on deep learning. Advances in Multimedia, 2022(1), 4109663.

Inkaew, C., & Thumawongsa, N. (2018). A study of English oral communication strategies used among Thai EFL students of different English proficiency levels: A case study of first year English major students, Srinakharinwirot University.

Park, M. S. (2020). Rater effects on L2 oral assessment: Focusing on accent familiarity of L2 teachers. Language Assessment Quarterly, 17(3), 231-243.

Xie, J., Zhu, M., & Hu, K. (2023). Fusion-based speech emotion classification using two-stage feature selection. Speech Communication, 152, 102955.

Zhou, H., Du, J., Zhang, Y., Wang, Q., Liu, Q. F., & Lee, C. H. (2021). Information fusion in attention networks using adaptive and multi-level factorized bilinear pooling for audio-visual emotion recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 2617-2629.

Nuthakki, P., Katamaneni, M., JN, C. S., Gubbala, K., Domathoti, B., Maddumala, V. R., & Jetti, K. R. (2023). Deep learning based multilingual speech synthesis using multi feature fusion methods. ACM Transactions on Asian and Low-Resource Language Information Processing.

Tao, H., Geng, L., Shan, S., Mai, J., & Fu, H. (2022). Multi-stream convolution-recurrent neural networks based on attention mechanism fusion for speech emotion recognition. Entropy, 24(8), 1025.

Sekkate, S., Khalil, M., Adib, A., & Ben Jebara, S. (2019). An investigation of a feature-level fusion for noisy speech emotion recognition. Computers, 8(4), 91.

Ma, Y., & Wang, W. (2022). MSFL: Explainable multitask-based shared feature learning for multilingual speech emotion recognition. Applied Sciences, 12(24), 12805.

Liu, D., Wang, Z., Wang, L., & Chen, L. (2021). Multi-modal fusion emotion recognition method of speech expression based on deep learning. Frontiers in Neurorobotics, 15, 697634.

Cao, Q., Hou, M., Chen, B., Zhang, Z., & Lu, G. (2021, June). Hierarchical network based on the fusion of static and dynamic features for speech emotion recognition. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6334-6338). IEEE.

Ma, Y., Guo, J., & Fang, L. (2022, October). Speech emotion recognition based on multi-feature fusion and DCNN. In Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering (pp. 1454-1459).

Zhou, H., Meng, D., Zhang, Y., Peng, X., Du, J., Wang, K., & Qiao, Y. (2019, October). Exploring emotion features and fusion strategies for audio-video emotion recognition. In 2019 International Conference on Multimodal Interaction (pp. 562-566).

Wang, C., Ren, Y., Zhang, N., Cui, F., & Luo, S. (2022). Speech emotion recognition based on multi-feature and multi-lingual fusion. Multimedia Tools and Applications, 81(4), 4897-4907.

Pham, N. T., Phan, L. T., Dang, D. N. M., & Manavalan, B. (2023, December). SER-Fuse: An emotion recognition application utilizing multi-modal, multi-lingual, and multi-feature fusion. In Proceedings of the 12th International Symposium on Information and Communication Technology (pp. 870-877).

Jothimani, S., & Premalatha, K. (2022). MFF-SAug: Multi feature fusion with spectrogram augmentation of speech emotion recognition using convolution neural network. Chaos, Solitons & Fractals, 162, 112512.

Hao, M., Cao, W. H., Liu, Z. T., Wu, M., & Xiao, P. (2020). Visual-audio emotion recognition based on multi-task and ensemble learning with multiple features. Neurocomputing, 391, 42-51.

Eriş, F. G., & Akbal, E. (2024). Enhancing speech emotion recognition through deep learning and handcrafted feature fusion. Applied Acoustics, 222, 110070.

Guo, Y., Zhou, Y., Xiong, X., Jiang, X., Tian, H., & Zhang, Q. (2023). A multi-feature fusion speech emotion recognition method based on frequency band division and improved residual network. IEEE Access.

Li, X. (2021, July). Automatic evaluation system of spoken English for multi person dialogue in English teaching based on multi feature fusion. In 2021 International Conference on Education, Information Management and Service Science (EIMSS) (pp. 269-272). IEEE.

Xuezhen, D. (2023, August). Oral expression evaluation algorithm based on multi-feature fusion. In 2023 3rd Asian Conference on Innovation in Technology (ASIANCON) (pp. 1-5). IEEE.

Feng, Z., Wang, W., Li, W., Li, G., Li, M., & Zhou, M. (2024). MFUR-Net: Multimodal feature fusion and unimodal feature refinement for RGB-D salient object detection. Knowledge-Based Systems, 112022.

Li, J., Hu, P., Gao, H., Shen, N., & Hua, K. (2024). Classification of cervical lesions based on multimodal features fusion. Computers in Biology and Medicine, 108589.

Sun, C., Zhang, Q., Zhuang, C., & Zhang, M. (2024). BMFNet: Bifurcated multi-modal fusion network for RGB-D salient object detection. Image and Vision Computing, 105048.

Bhutto, J. A., Jiang, G., Rahman, Z., Ishfaq, M., Sun, Z., & Soomro, T. A. (2024). Feature extraction of multimodal medical image fusion using novel deep learning and contrast enhancement method. Applied Intelligence, (7), 5907-5930.

Language:
English
Publication frequency:
Once a year
Journal subjects:
Life Sciences; Life Sciences, other; Mathematics; Applied Mathematics; General Mathematics; Physics; Physics, other