Authors:
Arpita Gupta, Nandhini Swaminathan, Ramadoss Balakrishnan

DOI:
https://doi.org/10.26782/jmcms.2020.07.00029

Keywords:
Facial Emotion Recognition, Deep Learning, Deep Residual Networks, Transfer Learning, Inductive Learning, Self-Attention

Abstract:
Image-based Facial Emotion Recognition (FER) aims to classify an image according to the basic emotion it communicates, and it is one of the most prominent research areas in computer vision. Most existing work targets high-quality images collected in laboratory settings; these differ substantially from real-life facial expressions, which leads to a shortage of labeled in-the-wild data. Deep learning with transfer learning has shown promising results in computer vision for addressing this lack of labeled data, and recent FER systems focus heavily on overcoming it. Our paper utilizes deep residual networks with inductive transfer learning and a self-attention module to address this problem. We experiment with different pretraining settings and source datasets for the model, namely ImageNet and the VGG Face dataset. A self-attention block is applied to give the model a better visual perspective. Our target dataset is FER-2013, a benchmark dataset for FER. The proposed model, TransFER, is a deep residual network based on inductive learning and an attention module, and it achieves superior performance to existing state-of-the-art transfer-learning models for FER.
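The paper itself does not publish code, but the self-attention module it builds on follows the scaled dot-product attention of Vaswani et al. (reference XV). A minimal NumPy sketch of that operation, with illustrative names and toy dimensions (the actual projection sizes in TransFER are not given here), might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of feature vectors.

    X: (n, d) input features, e.g. the flattened spatial positions of a
    convolutional feature map; Wq, Wk, Wv: (d, dk) learned projections.
    Returns (n, dk) attended features.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V

# Toy example: 4 spatial positions with 8-dimensional features.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In a residual-network setting such as the one described above, a block like this would typically reweight backbone features before the classification head; the projection matrices here are random stand-ins for learned parameters.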
References:

I Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. “Imagenet: A large-scale hierarchical image database.” In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
II Devries, Terrance, Kumar Biswaranjan, and Graham W. Taylor. “Multi-task learning of facial landmarks and expression.” In 2014 Canadian Conference on Computer and Robot Vision, pp. 98-103. IEEE, 2014.
III Ekman, Paul, and Wallace V. Friesen. “Constants across cultures in the face and emotion.” Journal of personality and social psychology 17, no. 2 (1971): 124.
IV Geng, Mengyue, Yaowei Wang, Tao Xiang, and Yonghong Tian. “Deep transfer learning for person re-identification.” arXiv preprint arXiv:1611.05244 (2016).
V Goodfellow, Ian J., Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski et al. “Challenges in representation learning: A report on three machine learning contests.” In International Conference on Neural Information Processing, pp. 117-124. Springer, Berlin, Heidelberg, 2013.
VI He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
VII Li, Shan, and Weihong Deng. “Deep emotion transfer network for cross-database facial expression recognition.” In 2018 24th International Conference on Pattern Recognition (ICPR), pp. 3092-3099. IEEE, 2018.
VIII Liu, Mengyi, Shaoxin Li, Shiguang Shan, and Xilin Chen. “Au-aware deep networks for facial expression recognition.” In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1-6. IEEE, 2013.
IX Miao, Yun-Qian, Rodrigo Araujo, and Mohamed S. Kamel. “Cross-domain facial expression recognition using supervised kernel mean matching.” In 2012 11th International Conference on Machine Learning and Applications, vol. 2, pp. 326-332. IEEE, 2012.
X Mollahosseini, Ali, David Chan, and Mohammad H. Mahoor. “Going deeper into facial expression recognition using deep neural networks.” In 2016 IEEE Winter conference on applications of computer vision (WACV), pp. 1-10. IEEE, 2016.
XI Ng, Hong-Wei, Viet Dung Nguyen, Vassilios Vonikakis, and Stefan Winkler. “Deep learning for emotion recognition on small datasets using transfer learning.” In Proceedings of the 2015 ACM on international conference on multimodal interaction, pp. 443-449. 2015.
XII Ouellet, Sébastien. “Real-time emotion recognition for gaming using deep convolutional network features.” arXiv preprint arXiv:1408.3750 (2014).
XIII Parkhi, Omkar M., Andrea Vedaldi, and Andrew Zisserman. “Deep face recognition.” (2015).
XIV Sandbach, Georgia, Stefanos Zafeiriou, Maja Pantic, and Daniel Rueckert. “Recognition of 3D facial expression dynamics.” Image and Vision Computing 30, no. 10 (2012): 762-773.
XV Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention is all you need.” In Advances in neural information processing systems, pp. 5998-6008. 2017.
XVI Xu, Mao, Wei Cheng, Qian Zhao, Li Ma, and Fang Xu. “Facial expression recognition based on transfer learning from deep convolutional networks.” In 2015 11th International Conference on Natural Computation (ICNC), pp. 702-708. IEEE, 2015.
XVII Yan, Haibin, Marcelo H. Ang, and Aun Neow Poo. “Cross-dataset facial expression recognition.” In 2011 IEEE International Conference on Robotics and Automation, pp. 5985-5990. IEEE, 2011.
XVIII Zhang, Zhanpeng, Ping Luo, Chen-Change Loy, and Xiaoou Tang. “Learning social relation traits from face images.” In Proceedings of the IEEE International Conference on Computer Vision, pp. 3631-3639. 2015.