Cross-Dataset Facial Expression Recognition based on Arousal-Valence Emotion Model and Transfer Learning Method
- DOI
- 10.2991/mecae-17.2017.24
- Keywords
- Facial expression recognition; Arousal-valence emotion dimensions; TPCA; Fusion.
- Abstract
Traditional facial expression recognition methods assume that the facial expressions in the training and testing sets are collected under the same conditions, so that they are independent and identically distributed. However, this assumption does not hold in many real applications; the resulting problem is referred to as cross-dataset facial expression recognition. In addition, traditional facial expression recognition methods are based on the basic emotion theory proposed by Ekman, which is limited in its ability to express diverse and subtle emotions. To address cross-dataset facial expression recognition and enrich emotion expression, this paper adopts the transfer learning algorithm TPCA together with the arousal-valence emotion model. A new facial emotion recognition method based on TPCA and two-level fusion is proposed, which combines weight fusion with correlation fusion between arousal and valence to improve recognition performance under cross-dataset scenarios. Comparative experiments show that the proposed method achieves better recognition results than traditional methods.
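Since the full text is not reproduced on this page, the sketch below is only an illustrative reading of the abstract, not the authors' implementation. It assumes a simple transfer-PCA-style step (projecting source and target features onto components learned from their union, a common way to obtain a shared subspace across datasets) and a hypothetical two-level fusion: a weighted average of arousal/valence predictions from two feature views, followed by a correlation-based adjustment between the fused arousal and valence scores. All function and parameter names (e.g., `transfer_pca`, `alpha`, `gamma`) are invented for illustration.

```python
# Illustrative sketch only: a simplified stand-in for TPCA + two-level fusion.
# It is NOT the paper's algorithm; names (transfer_pca, weight_fuse,
# correlation_fuse, alpha, gamma) are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def transfer_pca(X_src, X_tgt, n_components=50):
    """Project source and target features onto components learned from
    their union, a crude way to share one subspace across datasets."""
    pca = PCA(n_components=n_components).fit(np.vstack([X_src, X_tgt]))
    return pca.transform(X_src), pca.transform(X_tgt)

def weight_fuse(pred_a, pred_b, alpha=0.5):
    """First-level fusion: weighted average of two views' predictions."""
    return alpha * pred_a + (1.0 - alpha) * pred_b

def correlation_fuse(arousal, valence, gamma=0.1):
    """Second-level fusion: nudge each dimension toward the other in
    proportion to their empirical correlation."""
    rho = np.corrcoef(arousal, valence)[0, 1]
    return arousal + gamma * rho * valence, valence + gamma * rho * arousal

# Example flow with random stand-in features and labels.
rng = np.random.default_rng(0)
X_src, X_tgt = rng.normal(size=(200, 100)), rng.normal(size=(80, 100))
y_arousal, y_valence = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)

Z_src, Z_tgt = transfer_pca(X_src, X_tgt)

# Two "views": the shared subspace and the raw features (purely illustrative).
models = {}
for name, Xs in [("shared", Z_src), ("raw", X_src)]:
    models[name] = (Ridge().fit(Xs, y_arousal), Ridge().fit(Xs, y_valence))

arousal = weight_fuse(models["shared"][0].predict(Z_tgt),
                      models["raw"][0].predict(X_tgt))
valence = weight_fuse(models["shared"][1].predict(Z_tgt),
                      models["raw"][1].predict(X_tgt))
arousal, valence = correlation_fuse(arousal, valence)
```

The second-level step reflects the abstract's idea that arousal and valence are not independent, so exploiting their correlation can refine the fused predictions; the exact fusion rule used by the authors may differ.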
- Copyright
- © 2017, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
Yong Yang, Chuan Liu, Qingshan Wu. "Cross-Dataset Facial Expression Recognition based on Arousal-Valence Emotion Model and Transfer Learning Method." In: Proceedings of the 2017 International Conference on Mechanical, Electronic, Control and Automation Engineering (MECAE 2017), Atlantis Press, March 2017, pp. 132-138. ISSN 2352-5401. https://doi.org/10.2991/mecae-17.2017.24