Image Processing Pipeline for Facial Expression Recognition under Variable Lighting

1 Introduction

Automated facial expression recognition has proven beneficial in a variety of settings. For instance, the Wall Lab at Stanford Medical School uses expression recognition in a Google Glass application that helps children and adults with autism identify the emotions of the people they are interacting with. Research into improved classification accuracy for expression recognition can therefore have great impact. Many studies on this subject use images with uniform lighting conditions [1], which is understandable because it allows for accurate evaluation of the recognition algorithm. However, in most practical applications, emotion recognition is performed in real-world conditions where the lighting is diverse and far from uniform [2]. In this project, we aim to study the effects of different lighting and shadowing on the emotion recognition task and to find the best techniques for improving emotion recognition under variable lighting.

Feature extraction and classification algorithms vary greatly across facial expression recognition systems. For our project, we chose not to modify those two modules of the classification pipeline, but instead to optimize classifier accuracy through image processing alone. The goal is to find an image processing technique that can be plugged into any feature extraction pipeline, so that later stages of the classification pipeline never have to compensate for lighting.
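
One candidate for such a plug-in step is the illumination normalization chain of Tan and Triggs [2]: gamma correction, difference-of-Gaussians filtering, and contrast equalization. A minimal sketch in Python with OpenCV and NumPy follows, using the default parameter values suggested in their paper; the function name is ours.

    import cv2
    import numpy as np

    def normalize_illumination(gray, gamma=0.2, sigma0=1.0, sigma1=2.0,
                               alpha=0.1, tau=10.0):
        """Tan-Triggs style illumination normalization [2]."""
        img = gray.astype(np.float64) / 255.0
        # 1. Gamma correction compresses the dynamic range of bright regions.
        img = np.power(img, gamma)
        # 2. Difference of Gaussians suppresses low-frequency
        #    illumination gradients such as soft shadows.
        img = cv2.GaussianBlur(img, (0, 0), sigma0) \
            - cv2.GaussianBlur(img, (0, 0), sigma1)
        # 3. Two-stage contrast equalization rescales intensities robustly.
        img /= np.power(np.mean(np.power(np.abs(img), alpha)), 1.0 / alpha)
        img /= np.power(np.mean(np.power(np.minimum(np.abs(img), tau), alpha)),
                        1.0 / alpha)
        # 4. Squash residual extreme values and map back to 8-bit range.
        img = tau * np.tanh(img / tau)
        return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

Because this chain operates directly on grayscale pixels, it can run ahead of any feature extractor without changes to later stages of the pipeline.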

2 Goals

• Generate a data set with the desired lighting conditions (see Section 3 for more information).
• Measure how expression recognition error varies with increased shadows or illumination in images (a sketch of this measurement follows the list).
• Develop an image processing pipeline that minimizes recognition error under variable lighting.
• Measure expression recognition error for different image angles, then minimize the error under variable lighting for images at these angles.
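
The following is a minimal sketch of the error measurement in the second goal. It assumes the data set is grouped by lighting condition and that the classifier exposes a scikit-learn style predict method; the data layout and all names here are placeholders.

    import numpy as np

    def error_by_lighting(classifier, dataset):
        """Report recognition error rate per lighting condition.
        `dataset` maps a condition name (e.g. 'frontal', 'hard_shadow')
        to a list of (feature_vector, true_label) pairs."""
        for condition, samples in dataset.items():
            X = np.array([features for features, _ in samples])
            y = np.array([label for _, label in samples])
            # Error rate = fraction of samples whose predicted
            # expression label disagrees with the ground truth.
            error = np.mean(classifier.predict(X) != y)
            print(f"{condition}: {error:.1%} error")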

3 Data

For this project, we plan to use the Binghamton University 3D Facial Expression Database [3] as a starting point for generating images of facial expressions under variable lighting conditions. We will use Blender's Python scripting library to simulate different lighting conditions on each 3D facial model and then project the result to 2D images. We believe this method gives us the greatest freedom to vary the lighting in our data set and to address a wide variety of real-world lighting problems.

We will not be using an Android device.
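
As a rough illustration of the rendering step described above, a script along the following lines (written against Blender's 2.8+ Python API; the file paths and sweep angles are placeholders) could sweep a sun lamp around an imported face model and render one 2D image per lighting direction.

    import math
    import bpy

    # Assumes a 3D face model from the database has already been
    # imported into the scene and a camera is positioned facing it.
    scene = bpy.context.scene

    # Add a single sun lamp; for sun lamps only the direction matters.
    bpy.ops.object.light_add(type='SUN')
    sun = bpy.context.object

    for i, angle_deg in enumerate(range(-60, 61, 30)):
        # Tilt the lamp down 45 degrees, then rotate it about the
        # vertical axis so shadows fall from a different side each time.
        sun.rotation_euler = (math.radians(45.0), 0.0,
                              math.radians(angle_deg))
        scene.render.filepath = f"//renders/face_light_{i:02d}.png"
        bpy.ops.render.render(write_still=True)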

References

[1] Beat Fasel and Juergen Luettin. Automatic facial expression analysis: A survey. Pattern Recognition, 36(1), 2003.
[2] Xiaoyang Tan and Bill Triggs. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Transactions on Image Processing, 19(6), 2010.
[3] Lijun Yin, Xiaozhou Wei, Yi Sun, Jun Wang, and Matthew J. Rosato. A 3D facial expression database for facial behavior research. 7th International Conference on Automatic Face and Gesture Recognition, 2006.
