Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184

Frontiers of Information Technology & Electronic Engineering  2015, Vol. 16 Issue (4): 272-282   https://doi.org/10.1631/FITEE.1400209
Using Kinect for real-time emotion recognition via facial expressions
Qi-rong MAO, Xin-yu PAN, Yong-zhao ZHAN, Xiang-jun SHEN
Department of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
Abstract

Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their methods are usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions from these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.

Key words: Kinect; Emotion recognition; Facial expression; Real-time classification; Fusion algorithm; Support vector machine (SVM)
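The maximum-confidence fusion described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the authors' implementation: it supposes that each feature stream (AUs and FPPs) has already been classified by a per-stream SVM that outputs a per-emotion confidence profile, and that fusion selects the emotion with the highest confidence across both streams. The function name and data are hypothetical.

```python
import numpy as np

def fuse_max_confidence(profile_au, profile_fpp):
    """Hypothetical maximum-confidence fusion of two emotion profiles.

    profile_au, profile_fpp: 1-D arrays of per-emotion confidences
    (e.g., SVM probability outputs), one per feature stream.
    Returns the index of the winning emotion class.
    """
    # Stack the two streams' profiles: shape (2, n_emotions).
    profiles = np.vstack([profile_au, profile_fpp])
    # For each emotion, keep the higher confidence of the two streams,
    # then pick the emotion with the overall maximum confidence.
    best_per_class = profiles.max(axis=0)
    return int(np.argmax(best_per_class))

# Toy profiles over 3 emotions (e.g., happy, sad, surprise).
au_profile = np.array([0.2, 0.5, 0.3])   # confidences from AU features
fpp_profile = np.array([0.7, 0.1, 0.2])  # confidences from FPP features
print(fuse_max_confidence(au_profile, fpp_profile))  # -> 0 (first emotion wins)
```

In practice the per-stream profiles would come from probability-calibrated SVMs (e.g., LIBSVM with probability estimates, as cited by the paper); the fusion step itself is independent of how the profiles are produced.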
Received: 2014-06-12      Published: 2015-04-14
Corresponding Author(s): Qi-rong MAO,Xin-yu PAN   
Cite this article:
Qi-rong MAO, Xin-yu PAN, Yong-zhao ZHAN, Xiang-jun SHEN. Using Kinect for real-time emotion recognition via facial expressions. Front. Inform. Technol. Electron. Eng., 2015, 16(4): 272-282.
Link to this article:
https://academic.hep.com.cn/fitee/CN/10.1631/FITEE.1400209
https://academic.hep.com.cn/fitee/CN/Y2015/V16/I4/272