Electronics, Vol. 13, Pages 4410: Multi-Head Attention Affinity Diversity Sharing Network for Facial Expression Recognition
Electronics doi: 10.3390/electronics13224410
Authors: Caixia Zheng, Jiayu Liu, Wei Zhao, Yingying Ge, Wenhe Chen
Facial expressions exhibit inherent similarities, variability, and complexity. In real-world scenarios, challenges such as partial occlusions, illumination changes, and individual differences further complicate the task of facial expression recognition (FER). To further improve the accuracy of FER, a Multi-Head Attention Affinity and Diversity Sharing Network (MAADS) is proposed in this paper. MAADS comprises a Feature Discrimination Network (FDN), an Attention Distraction Network (ADN), and a Shared Fusion Network (SFN). Specifically, the FDN integrates attention weights into the objective function via the proposed sparse affinity loss to capture the most discriminative features. The ADN then employs multiple parallel attention networks that maximize diversity across spatial and channel attention units, guiding the network to focus on distinct, non-overlapping facial regions. Finally, the SFN decomposes facial features into generic and unique components, allowing the network to learn the distinctions between them without relearning complete features from scratch. To validate the effectiveness of the proposed method, extensive experiments were conducted on several widely used in-the-wild datasets, including RAF-DB, AffectNet-7, AffectNet-8, FERPlus, and SFEW. MAADS achieves accuracies of 92.93%, 67.14%, 64.55%, 91.58%, and 62.41% on these datasets, respectively. The experimental results indicate that MAADS not only outperforms current state-of-the-art methods in recognition accuracy but also has relatively low computational complexity.
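To make the attention-diversity idea concrete, the following is a minimal PyTorch-style sketch of parallel spatial attention heads whose attention maps are pushed apart by a pairwise cosine-similarity penalty, so that each head attends to a different facial region. All module names, tensor shapes, and the exact form of the penalty are illustrative assumptions, not the paper's implementation; the paper's ADN additionally diversifies channel attention units, which this sketch omits for brevity.

```python
# Illustrative sketch only: parallel attention heads with a diversity
# penalty, assumed shapes and loss form; not the MAADS implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttentionHead(nn.Module):
    """One attention branch: produces a (B, 1, H, W) spatial attention map."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps each map in (0, 1); a softmax over H*W would be
        # another reasonable choice.
        return torch.sigmoid(self.conv(feats))


class ParallelAttentionWithDiversity(nn.Module):
    """Runs several attention heads in parallel and returns the attended
    features plus a diversity loss that penalizes overlap between the
    heads' attention maps."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(
            [SpatialAttentionHead(channels) for _ in range(num_heads)]
        )

    def forward(self, feats: torch.Tensor):
        maps = [head(feats) for head in self.heads]      # each (B, 1, H, W)
        attended = [feats * m for m in maps]             # per-head features

        # Diversity penalty: mean pairwise cosine similarity between the
        # flattened attention maps. Minimizing it pushes the heads toward
        # distinct, non-overlapping regions.
        flat = torch.stack([m.flatten(1) for m in maps], dim=1)  # (B, K, H*W)
        flat = F.normalize(flat, dim=-1)
        sim = torch.bmm(flat, flat.transpose(1, 2))              # (B, K, K)
        k = sim.size(1)
        off_diag = sim - torch.eye(k, device=sim.device)         # zero self-sim
        div_loss = off_diag.clamp(min=0).sum(dim=(1, 2)).mean() / (k * (k - 1))

        return attended, div_loss


if __name__ == "__main__":
    x = torch.randn(2, 64, 14, 14)  # a batch of backbone feature maps
    adn = ParallelAttentionWithDiversity(channels=64, num_heads=4)
    attended, div_loss = adn(x)
    print(len(attended), attended[0].shape, float(div_loss))
```

In a training loop, such a diversity term would be added to the classification objective with a weighting coefficient, so the heads stay useful for recognition while being discouraged from collapsing onto the same facial region.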