
International Journal on Smart Sensing and Intelligent Systems

Professor Subhas Chandra Mukhopadhyay

Exeley Inc. (New York)

Subject: Computational Science & Engineering, Engineering, Electrical & Electronic


eISSN: 1178-5608



VOLUME 8, ISSUE 1 (March 2015)


Shaoping Zhu * / Yongliang Xiao

Keywords : Facial expression, ASM model, Optical flow model, Bag of Words, Multi-Instance Boosting model.

Citation Information : International Journal on Smart Sensing and Intelligent Systems. Volume 8, Issue 1, Pages 581-601, DOI:

License : (CC BY-NC-ND 4.0)

Received Date : 02-October-2014 / Accepted: 24-January-2015 / Published Online: 01-March-2015



Human facial expression detection plays a central role in pervasive health care and is an active research field in computer vision. In this paper, a novel method for detecting facial expressions from dynamic facial images is proposed, comprising two stages: feature extraction and expression detection. First, the Active Shape Model (ASM) is used to extract local texture features, and an optical flow technique determines the facial velocity information that characterizes facial expression. The local texture features and facial velocity information are then fused into hybrid features using a Bag of Words representation. Finally, a Multi-Instance Boosting model recognizes facial expressions from video sequences. To speed up learning and complete the detection, class label information is exploited when training the Multi-Instance Boosting model. Experiments were performed on a facial expression dataset we built ourselves and on the JAFFE database. The proposed method achieves a detection accuracy of 95.3%, substantially higher than previously reported results, which validates its effectiveness and meets requirements for stability, reliability, high precision, and anti-interference ability.
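The pipeline described above (ASM texture features and optical-flow velocities fused via Bag of Words, then classified per video with Multi-Instance Boosting) can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed data shapes, not the authors' implementation: `codebook` stands in for a visual vocabulary that would normally be learned by clustering training descriptors, and the Noisy-OR rule shown is one common way MIL-style boosting combines per-frame (instance) scores into a per-video (bag) decision.

```python
import numpy as np

def fuse_features(shape_feats, flow_feats):
    """Concatenate per-frame ASM shape/texture features with
    optical-flow velocity features into one hybrid descriptor."""
    return np.hstack([shape_feats, flow_feats])

def bag_of_words_histogram(descriptors, codebook):
    """Quantize each hybrid descriptor to its nearest codebook word
    and return an L1-normalized word histogram for the sequence."""
    # Pairwise distances: (n_descriptors, n_words)
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def bag_probability(instance_probs):
    """Noisy-OR rule often used in Multi-Instance Boosting: a bag
    (video) is positive if at least one instance (frame) is positive."""
    return 1.0 - np.prod(1.0 - np.asarray(instance_probs))
```

With a learned codebook, each video sequence would be reduced to one histogram via `bag_of_words_histogram`, and a boosted classifier's per-frame scores would be aggregated with `bag_probability` to label the whole sequence.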



