Open Access

Emotional analysis and semantic understanding of multimodal network language data

  
31 Mar 2025

Figure 1.

Overall framework of the model

Figure 2.

Extracted MFCC features
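The MFCC features shown in Figure 2 follow the standard extraction pipeline: frame the waveform, take the power spectrum, apply a triangular mel filterbank, take the log, and decorrelate with a DCT. As an illustrative sketch only (not the paper's implementation), this can be written in plain NumPy; all parameter values here (16 kHz rate, 512-point FFT, 26 mel bands, 13 coefficients) are conventional defaults, not values taken from the paper:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Minimal MFCC pipeline: frame -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal with a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, c):
            fbank[i, k] = (k - lo) / max(c - lo, 1)   # rising slope
        for k in range(c, hi):
            fbank[i, k] = (hi - k) / max(hi - c, 1)   # falling slope
    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II to decorrelate; keep the first n_mfcc coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1)) / (2 * n_mels))
    return logmel @ dct.T

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of a 440 Hz tone
feats = mfcc(sig)
print(feats.shape)  # (61, 13): 61 frames, 13 coefficients per frame
```

In practice a library routine such as `librosa.feature.mfcc` would replace this hand-rolled version; the sketch only makes the stages in Figure 2 concrete.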

Figure 3.

Transformer model structure
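Figure 3 depicts the Transformer used for the text modality. As a hedged sketch of one encoder layer (single-head for brevity, untrained random weights, not the paper's configuration): self-attention and a position-wise feed-forward network, each wrapped in a residual connection and layer normalization:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each token vector to zero mean, unit variance
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over the token sequence
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

def encoder_layer(x, Wq, Wk, Wv, W1, W2):
    """Attention and FFN sublayers, each with a residual + LayerNorm."""
    x = layer_norm(x + attention(x, Wq, Wk, Wv))
    ffn = np.maximum(x @ W1, 0) @ W2   # position-wise FFN with ReLU
    return layer_norm(x + ffn)

d = 16
rng = np.random.default_rng(1)
x = rng.standard_normal((5, d))          # 5 tokens, d-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
W1 = rng.standard_normal((d, 4 * d)) * 0.1
W2 = rng.standard_normal((4 * d, d)) * 0.1
out = encoder_layer(x, Wq, Wk, Wv, W1, W2)
print(out.shape)  # (5, 16): sequence length and embedding size are preserved
```

A full Transformer stacks several such layers and adds positional encodings and multiple heads; this only illustrates the per-layer structure in the figure.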

Figure 4.

Schematic diagram of the ResNet residual block
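The residual block in Figure 4 computes y = ReLU(x + F(x)), where F is a small stack of convolutions; the identity shortcut lets gradients bypass F. A minimal NumPy sketch with two 3x3 convolutions (illustrative shapes, not the paper's channel counts):

```python
import numpy as np

def conv3x3(x, w):
    # 'same' 3x3 convolution: x is (H, W, C_in), w is (3, 3, C_in, C_out)
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[-1]))
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(xp[i:i + 3, j:j + 3, :], w, axes=3)
    return out

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)) with F = conv -> ReLU -> conv and an identity shortcut."""
    h = np.maximum(conv3x3(x, w1), 0)
    h = conv3x3(h, w2)
    return np.maximum(x + h, 0)   # the skip connection adds x back before the final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))          # toy 8x8 feature map with 4 channels
w1 = rng.standard_normal((3, 3, 4, 4)) * 0.1
w2 = rng.standard_normal((3, 3, 4, 4)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (8, 8, 4): same shape as the input, as the identity shortcut requires
```

When input and output channel counts differ, real ResNets replace the identity shortcut with a 1x1 convolution; the identity case shown here matches the schematic.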

Figure 5.

Multi-head attention feature fusion mechanism
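One common way to realize the fusion mechanism of Figure 5 is to treat the per-modality embeddings as a short token sequence and run multi-head self-attention over it, so each modality can weight the others. The sketch below assumes (hypothetically, since the paper's exact wiring is not given here) that text, image, and audio features have already been projected into a shared d-dimensional space:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def multi_head_fusion(tokens, Wq, Wk, Wv, Wo, n_heads=4):
    """Split d_model into n_heads subspaces, attend in each head, concat, project."""
    T, d = tokens.shape
    dh = d // n_heads
    # Project and reshape to (heads, tokens, head_dim)
    q = (tokens @ Wq).reshape(T, n_heads, dh).transpose(1, 0, 2)
    k = (tokens @ Wk).reshape(T, n_heads, dh).transpose(1, 0, 2)
    v = (tokens @ Wv).reshape(T, n_heads, dh).transpose(1, 0, 2)
    att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh)) @ v   # (heads, T, dh)
    fused = att.transpose(1, 0, 2).reshape(T, d) @ Wo           # concat heads, project
    return fused.mean(0)   # pool over the three modality tokens

d = 32
rng = np.random.default_rng(2)
# Hypothetical per-modality embeddings, already in a shared d-dim space
text_f, image_f, audio_f = rng.standard_normal((3, d))
tokens = np.stack([text_f, image_f, audio_f])
Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
fused = multi_head_fusion(tokens, Wq, Wk, Wv, Wo)
print(fused.shape)  # (32,): one fused vector for the downstream classifier
```

Mean pooling is one of several possible readouts; a learned query token or concatenation would also fit the diagram.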

Figure 6.

Performance comparison of different models in emotional analysis tasks

Figure 7.

Confusion matrix of the model on the dataset
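A confusion matrix like the one in Figure 7 counts, for each true class, how often each class was predicted; the diagonal holds the correct predictions. A minimal sketch with made-up labels for a hypothetical 3-class emotion task (these labels are illustrative, not the paper's data):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[i, j] = number of samples with true class i predicted as class j
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels: 0 = negative, 1 = neutral, 2 = positive
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
# [[1 1 0]
#  [0 2 0]
#  [1 0 2]]
```

Per-class precision and recall (and hence the F1 values reported in the tables) can be read directly off the columns and rows of this matrix.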

Figure 8.

Semantic tags identified by the model on the dataset

Performance of the model in semantic tag recognition

Metric       Value (%)
Precision    79.3
Recall       76.8
F1           78.0
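The F1 value in the table is the harmonic mean of precision and recall, and the reported numbers are internally consistent, as a quick check shows:

```python
# F1 as the harmonic mean of the table's precision and recall values
precision, recall = 79.3, 76.8
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 1))  # 78.0, matching the reported F1 value
```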

Overall performance of different models in the emotional analysis task

Model type           Accuracy   Recall   F1
Text only            78.6%      77.8%    78.2%
Image only           80.5%      83.2%    81.8%
Audio only           82.1%      81.4%    81.7%
Multimodal fusion    85.2%      86.7%    85.0%
Language:
English
Publication frequency:
1 issue per year
Journal subjects:
Biological Sciences, Life Sciences, other, Mathematics, Applied Mathematics, General Mathematics, Physics, other