Evidence-Based Research on Multimodal Fusion Emotion Recognition
- DOI
- 10.2991/978-94-6463-200-2_61
- Keywords
- Multimodal fusion; uncertainty; D-S evidence theory; MELD
- Abstract
Multimodal fusion classifiers generalize better than single-modality models and can be applied in a variety of domains, including medical care, automotive autopilot, and, as in this paper, emotion recognition. Motivated by the way humans perceive emotion, this study merges information from the auditory and visual modalities into a novel multimodal fusion emotion-recognition algorithm and conducts experiments to confirm the algorithm's stability. Uncertainty is quantified and treated as a set of fuzzy propositions for decision-level fusion, and a credible recognition decision is ultimately produced by combining the evidence with Dempster-Shafer (D-S) theory. The proposed fusion approach achieves 81.25% recognition accuracy on the MELD dataset.
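The decision-level fusion the abstract describes can be illustrated with Dempster's rule of combination. The sketch below is not the authors' implementation; the emotion labels, mass values, and the use of the full frame of discernment to represent each modality's uncertainty are illustrative assumptions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal elements, multiply masses,
    and renormalize by the total non-conflicting mass."""
    combined = {}
    conflict = 0.0  # mass landing on empty intersections (contradictory evidence)
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Hypothetical example: audio and visual classifiers over two emotions,
# with each modality's quantified uncertainty assigned to the whole
# frame of discernment THETA (i.e., "could be either").
HAPPY, SAD = frozenset({"happy"}), frozenset({"sad"})
THETA = HAPPY | SAD
audio  = {HAPPY: 0.6, SAD: 0.1, THETA: 0.3}   # 0.3 = audio uncertainty
visual = {HAPPY: 0.7, SAD: 0.2, THETA: 0.1}   # 0.1 = visual uncertainty

fused = dempster_combine(audio, visual)
# Fusion sharpens the decision: belief in "happy" rises above either
# single modality, while residual ignorance (THETA) shrinks.
```

Assigning leftover probability mass to THETA rather than splitting it among the singletons is what lets D-S theory model "don't know" separately from "equally likely", which is the role the paper's uncertainty quantification plays before fusion.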
- Copyright
- © 2023 The Author(s)
- Open Access
- Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite this article
TY  - CONF
AU  - Zhiqiang Huang
AU  - Mingchao Liao
PY  - 2023
DA  - 2023/07/26
TI  - Evidence-Based Research on Multimodal Fusion Emotion Recognition
BT  - Proceedings of the 2023 3rd International Conference on Public Management and Intelligent Society (PMIS 2023)
PB  - Atlantis Press
SP  - 594
EP  - 601
SN  - 2589-4919
UR  - https://doi.org/10.2991/978-94-6463-200-2_61
DO  - 10.2991/978-94-6463-200-2_61
ID  - Huang2023
ER  -