Open Access

Interpreting Convolutional Layers in DNN Model Based on Time–Frequency Representation of Emotional Speech


The paper describes how the speech signal is represented in the layers of a convolutional neural network. Using activation maps determined by the Grad-CAM algorithm, we analysed the energy distribution in the time–frequency space and its relationship with the prosodic properties of the considered emotional utterances. After preliminary experiments with an expressive speech classification task, we selected the CQT-96 time–frequency representation; in the main experimental phase of the study, we used a custom CNN architecture with three convolutional layers. Based on the performed analysis, we show the relationship between activation levels and changes in the voiced parts of the fundamental frequency trajectories. As a result, the relationships between the individual activation maps, the energy distribution, and the fundamental frequency trajectories were described for six emotional states. The results show that, during learning, the convolutional neural network uses similar fragments of the time–frequency representation, which are also related to the prosodic properties of emotional speech utterances. We also analysed how the obtained activation maps relate to time-domain envelopes, which revealed the importance of speech-signal energy in classifying individual emotional states. Finally, we compared the energy distribution of the CQT representation with the energy of the regions overlapping the masks of individual emotional states, which yielded information on the variability of the energy distributions in the selected speech-signal representation for particular emotions.
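The Grad-CAM weighting step mentioned in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration of the channel-weighting idea only (the function name and toy array shapes are my own); it assumes the activations of the last convolutional layer, and the gradients of the class score with respect to them, have already been extracted from a trained CNN:

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Combine conv feature maps into a single class-activation map.

    activations: (K, H, W) feature maps of the chosen conv layer
    gradients:   (K, H, W) d(class score)/d(activations)
    """
    # One weight per channel: global-average-pool the gradients.
    alphas = gradients.mean(axis=(1, 2))                 # shape (K,)
    # Weighted sum of the feature maps over the channel axis.
    cam = np.tensordot(alphas, activations, axes=1)      # shape (H, W)
    # ReLU keeps only regions that positively support the class.
    cam = np.maximum(cam, 0.0)
    # Normalise to [0, 1] so the map can be overlaid on the CQT image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting map has the spatial resolution of the conv layer and is typically upsampled to the size of the time–frequency representation before being overlaid on it.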

eISSN:
2449-6499
Language:
English
Publication timeframe:
4 times per year
Journal subjects:
Computer Sciences, Databases and Data Mining, Artificial Intelligence