AUB ScholarWorks

Facial expression recognition from images for various head poses


dc.contributor.author Trad, Chadi Hanna
dc.date.accessioned 2017-08-30T14:06:09Z
dc.date.available 2017-08-30T14:06:09Z
dc.date.issued 2015
dc.date.submitted 2015
dc.identifier.other b18374104
dc.identifier.uri http://hdl.handle.net/10938/10668
dc.description Thesis. M.E. American University of Beirut. Department of Electrical and Computer Engineering, 2015. ET:6299
dc.description Advisor : Dr. Hazem Hajj, Associate Professor, Electrical and Computer Engineering ; Members of Committee : Dr. Fadi Karameh, Associate Professor, Electrical and Computer Engineering ; Dr. Wassim El-Hajj, Associate Professor, Computer Science ; Dr. Daniel Asmar, Associate Professor, Mechanical Engineering.
dc.description Includes bibliographical references (leaves 35-37)
dc.description.abstract In facial expression detection from image or video modalities, variation of the head pose with respect to the camera poses a challenge for robust recognition. Several studies have examined the effect of pose on the recognition rate. The prevalent methodology transforms the facial features back to a frontal pose before inferring the facial expression. Some work has further considered splitting the face into multiple parts and then performing a simple maximum combination of the classifications. In this work, we propose a new approach for splitting and fusing facial features under head yaw rotations. The approach splits the face into left and right features, and two methods are proposed to classify the facial expression. In the first method, we detect facial Action Units (AUs) in the left and right parts and combine the results using a logical OR operation. In the second method, we propose an optimized fusion of the facial expressions, whose outcome is a set of weights for combining the classifications from each side of the face at different yaw angles. The weights are determined dynamically from the head yaw angle through polynomial regression. Experiments were conducted on both methods using a custom-made database and a set of benchmark 3D facial images. The results showed a 7.1 percent improvement for the proposed split-and-fuse method over the full-facial-features approach. Furthermore, the optimized fusion method outperformed max-based fusion.
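The second, optimized fusion method in the abstract can be illustrated with a short sketch. The Python snippet below shows one plausible way to fit a polynomial mapping from head yaw angle to a per-side fusion weight and to combine left- and right-face expression scores; all function names, yaw angles, weight values, and scores are hypothetical assumptions for illustration and are not taken from the thesis.

    import numpy as np

    # Sketch only: illustrative names and numbers, not the thesis implementation.

    def fit_weight_curve(yaw_angles, left_weights, degree=3):
        # Fit a polynomial mapping head yaw angle -> weight of the left-face
        # classifier; the right-face weight is taken as (1 - left weight).
        return np.polyfit(yaw_angles, left_weights, degree)

    def fuse_expressions(yaw, left_scores, right_scores, coeffs):
        # Combine per-expression scores from the two face halves using a
        # yaw-dependent weight evaluated from the fitted polynomial.
        w_left = np.clip(np.polyval(coeffs, yaw), 0.0, 1.0)
        fused = w_left * np.asarray(left_scores) + (1.0 - w_left) * np.asarray(right_scores)
        return int(np.argmax(fused)), fused

    # Hypothetical calibration: weights learned at a few sample yaw angles.
    coeffs = fit_weight_curve(yaw_angles=[-45, -20, 0, 20, 45],
                              left_weights=[0.15, 0.35, 0.50, 0.65, 0.85])
    label, scores = fuse_expressions(yaw=30,
                                     left_scores=[0.2, 0.6, 0.2],
                                     right_scores=[0.1, 0.3, 0.6],
                                     coeffs=coeffs)

Under these assumptions, a face turned toward one side gives more weight to the half of the face that remains visible, which is the intuition behind the yaw-dependent weights described in the abstract.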
dc.format.extent 1 online resource (x, 37 leaves) : illustrations ; 30 cm
dc.language.iso eng
dc.relation.ispartof Theses, Dissertations, and Projects
dc.subject.classification ET:006299
dc.subject.lcsh Facial expression -- Computer simulation.
dc.subject.lcsh Artificial intelligence.
dc.subject.lcsh Image processing.
dc.subject.lcsh Computer vision.
dc.subject.lcsh Pattern perception.
dc.title Facial expression recognition from images for various head poses
dc.type Thesis
dc.contributor.department Faculty of Engineering and Architecture.
dc.contributor.department Department of Electrical and Computer Engineering.
dc.contributor.institution American University of Beirut.

