Emotional analysis and semantic understanding of multimodal network language data
ABOUT THIS ARTICLE
Published online: 31 Mar 2025
Received: 05 Nov 2024
Accepted: 13 Feb 2025
DOI: https://doi.org/10.2478/amns-2025-0818
© 2025 Chen Weimiao, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Performance of the model in semantic tag recognition
| Index | Value (%) |
| --- | --- |
| Precision | 79.3 |
| Recall | 76.8 |
| F1 score | 78.0 |
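As a quick consistency check (arithmetic added here, not stated in the source), the reported F1 score agrees with the harmonic mean of the reported precision and recall:

```latex
F_1 = \frac{2PR}{P + R} = \frac{2 \times 79.3 \times 76.8}{79.3 + 76.8} \approx 78.0
```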
Overall performance of different models on the affective analysis task

| Model type | Accuracy | Recall | F1 score |
| --- | --- | --- | --- |
| Text only | 78.6% | 77.8% | 78.2% |
| Image only | 80.5% | 83.2% | 81.8% |
| Audio only | 82.1% | 81.4% | 81.7% |
| Multimodal fusion | 85.2% | 86.7% | 85.0% |
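The table compares single-modality baselines against a fused model. As a minimal illustrative sketch of feature-level multimodal fusion (not the paper's actual architecture, which this excerpt does not specify; the embedding dimensions, class count, and module names below are assumptions):

```python
# Minimal sketch of feature-level (concatenation) multimodal fusion in PyTorch.
# Assumes pre-extracted text, image, and audio embeddings; all sizes are illustrative.
import torch
import torch.nn as nn


class FusionClassifier(nn.Module):
    """Concatenate per-modality embeddings and classify sentiment."""

    def __init__(self, text_dim=768, image_dim=512, audio_dim=128,
                 hidden_dim=256, num_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim + audio_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_emb, image_emb, audio_emb):
        # Feature-level fusion: concatenate along the feature axis, then classify.
        fused = torch.cat([text_emb, image_emb, audio_emb], dim=-1)
        return self.head(fused)


if __name__ == "__main__":
    model = FusionClassifier()
    # Dummy batch of 4 samples with the assumed embedding sizes.
    logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 128))
    print(logits.shape)  # torch.Size([4, 3])
```

Single-modality baselines correspond to training the same classification head on one embedding at a time, which is one common way such comparisons are set up.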