Open Access

Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection



[1] S. Ramakrishnan and I. M. M. El Emary, “Speech emotion recognition approaches in human computer interaction,” Telecommun. Systems, vol. 52, issue 3, pp. 1467–1478, Mar. 2013. https://doi.org/10.1007/s11235-011-9624-z

[2] S. G. Koolagudi and K. S. Rao, “Emotion recognition from speech: a review,” Int. J. of Speech Technology, vol. 15, issue 2, pp. 99–117, June 2012. https://doi.org/10.1007/s10772-011-9125-1

[3] Z. Xiao, E. Dellandrea, L. Chen and W. Dou, “Recognition of emotions in speech by a hierarchical approach,” in 2009 3rd Int. Conf. on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, 2009, pp. 1–8. https://doi.org/10.1109/ACII.2009.5349587

[4] P. Giannoulis and G. Potamianos, “A hierarchical approach with feature selection for emotion recognition from speech,” in Proc. of the Eighth Int. Conf. on Language Resources and Evaluation, 2012, pp. 1203–1206.

[5] B. Schuller, B. Vlasenko, F. Eyben, G. Rigoll and A. Wendemuth, “Acoustic emotion recognition: A benchmark comparison of performances,” in 2009 IEEE Workshop on Automatic Speech Recognition & Understanding, Merano, 2009, pp. 552–557. https://doi.org/10.1109/ASRU.2009.5372886

[6] A. Origlia, V. Galatà and B. Ludusan, “Automatic classification of emotions via global and local prosodic features on a multilingual emotional database,” in Proc. of Speech Prosody, 2010.

[7] M. Lugger, M.-E. Janoir and B. Yang, “Combining classifiers with diverse feature sets for robust speaker independent emotion recognition,” in 2009 17th European Signal Processing Conf., Glasgow, 2009, pp. 1225–1229.

[8] H. Peng, F. Long and C. Ding, “Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, issue 8, pp. 1226–1238, Aug. 2005. https://doi.org/10.1109/TPAMI.2005.159

[9] A. Mencattini, E. Martinelli, G. Costantini, M. Todisco, B. Basile, M. Bozzali and N. Di Corrado, “Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure,” Knowledge-Based Systems, vol. 63, pp. 68–81, June 2014. https://doi.org/10.1016/j.knosys.2014.03.019

[10] A. Milton and S. Tamil Selvi, “Class-specific multiple classifiers scheme to recognize emotions from speech signals,” Comput. Speech and Language, vol. 28, issue 3, pp. 727–742, May 2014. https://doi.org/10.1016/j.csl.2013.08.004

[11] L. Chen, X. Mao, Y. Xue and L. L. Cheng, “Speech emotion recognition: Features and classification models,” Digital Signal Processing, pp. 1154–1160, Dec. 2012. https://doi.org/10.1016/j.dsp.2012.05.007

[12] W.-J. Yoon and K.-S. Park, “Building robust emotion recognition system on heterogeneous speech databases,” in 2011 IEEE Int. Conf. on Consumer Electronics (ICCE), Las Vegas, NV, 2011, pp. 825–826. https://doi.org/10.1109/ICCE.2011.5722886

[13] J. Liu, C. Chen, J. Bu, M. You and J. Tao, “Speech emotion recognition using an enhanced co-training algorithm,” in 2007 IEEE Int. Conf. on Multimedia and Expo, Beijing, 2007, pp. 999–1002. https://doi.org/10.1109/ICME.2007.4284821

[14] M. Kotti and F. Paternò, “Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema,” Int. J. of Speech Technology, vol. 15, issue 2, pp. 131–150, June 2012. https://doi.org/10.1007/s10772-012-9127-7

[15] G. Tamulevicius and T. Liogiene, “Low-order multi-level features for speech emotion recognition,” Baltic J. of Modern Computing, vol. 3, no. 4, pp. 234–247, 2015.

[16] T. Liogiene and G. Tamulevicius, “Minimal cross-correlation criterion for speech emotion multi-level feature selection,” in Proc. of the Open Conf. of Electrical, Electronic and Information Sciences (eStream), Vilnius, 2015, pp. 1–4. https://doi.org/10.1109/eStream.2015.7119492

[17] F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier and B. Weiss, “A database of German emotional speech,” in Proc. of Interspeech, Lisbon, 2005, pp. 1517–1520. https://doi.org/10.21437/Interspeech.2005-446

[18] J. Matuzas, T. Tišina, G. Drabavičius and L. Markevičiūtė, “Lithuanian Spoken Language Emotions Database,” Baltic Institute of Advanced Language, 2015. [Online]. Available: http://datasets.bpti.lt/lithuanian-spoken-language-emotions-database/

[19] F. Eyben, M. Wollmer and B. Schuller, “OpenEAR – Introducing the Munich open-source emotion and affect recognition toolkit,” in 2009 3rd Int. Conf. on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, 2009, pp. 1–6. https://doi.org/10.1109/ACII.2009.5349350

eISSN:
2255-9159
Language:
English
Publication schedule:
2 times per year
Journal subjects:
Engineering, Introductions and Overviews, other