- Open Access
LBP-based periocular recognition on challenging face datasets
© Mahalingam and Ricanek; licensee Springer, 2013
- Received: 2 October 2012
- Accepted: 6 June 2013
- Published: 1 July 2013
This work develops a novel face-based matcher composed of a multi-resolution hierarchy of patch-based feature descriptors for periocular recognition - recognition based on the soft tissue surrounding the eye orbit. The novel patch-based framework for periocular recognition is compared against other feature descriptors and a commercial full-face recognition system on a set of four uniquely challenging face corpora. The framework, hierarchical three-patch local binary pattern, is compared against the three-patch local binary pattern and the uniform local binary pattern on the soft tissue area around the eye orbit. Each challenge set was chosen for its particular non-ideal face representations, which may be summarized as matching against pose, illumination, expression, aging, and occlusions. The MORPH corpus consists of two mug shot datasets, labeled Album 1 and Album 2. Album 1 is the more challenging of the two due to its incorporation of print (legacy) photographs captured with a variety of cameras from the late 1960s to the 1990s. The second challenge dataset is the FRGC still image set. Corpus three, the Georgia Tech face database, is a small corpus but one that contains faces under pose, illumination, expression, and eye region occlusions. The final challenge dataset chosen is the Notre Dame Twins database, which comprises 100 sets of identical twins and 1 set of triplets. The proposed framework reports top periocular performance against each dataset, as measured by rank-1 accuracy: (1) MORPH Album 1, 33.2%; (2) FRGC, 97.51%; (3) Georgia Tech, 92.4%; and (4) Notre Dame Twins, 98.03%. Furthermore, this work shows that the proposed periocular matcher (using only a small section of the face, about the eyes) compares favorably to a commercial full-face matcher.
- Recognition Performance
- Local Binary Pattern
- Georgia Tech
- Template Aging
- Periocular Region
The field of biometrics has made significant accomplishments over the last 20 years. Biometric systems are now deployed in dozens of countries for a host of purposes, from national identification to access control, amusement park entry, and automatic login for computing devices. As the technology matures, users demand better performance against non-ideal (poor) biometric signals; e.g., border crossing systems should be able to capture the biometric signal of the iris or face while patrons are moving, and computers should be able to authenticate patron credentials 10 years or more after enrollment without the requirement of template updating. Deployers as well as end users of biometric systems demand more flexibility in acquiring the biometric signal and better performance when matching against biometric templates that differ due to pose, illumination, expression, and aging.
Non-ideal biometrics, also known as unconstrained biometrics, refers to systems that do not force (constrain) the user to submit their biometric signal (face, iris, fingerprint, etc.) in a purposed manner. Furthermore, such systems can perform robust matching against templates that have been acquired under non-ideal or bad conditions. Non-ideal face recognition systems are capable of matching well against probe images that may exhibit poor image quality, low image resolution, poor lighting, occlusions and disguises, heavy pose variation, and/or moderate to severe expression or face contortions. Non-ideal face recognition must also contend with aging and the challenges of matching under aging, as well as the case of matching in the presence of extremely similar faces, i.e., discriminating between identical twins.
Periocular-based recognition has recently gained increasing attention from biometric researchers. Park et al. studied the use of the periocular region as a useful biometric when iris recognition fails. The authors proposed a matching scheme with three descriptors: gradient orientation, local binary pattern (LBP), and SIFT. Their experimental comparison of periocular-based recognition with that of face recognition under occlusion showed superior performance of the periocular recognition system. Similar studies illustrated the effectiveness of periocular-based features for recognition using images focused on capturing the iris. Both the periocular skin texture and its appearance cues [5, 6] were used for recognition. Padole and Proenca studied the performance of a periocular-based recognition system under the influence of scale, pose, occlusion, etc. and concluded that the performance of the recognition system degrades in the presence of such covariates. Xu et al. proposed an age-invariant recognition system based on periocular features against a small dataset of longitudinal images.
Periocular features have also been used to identify other soft biometric cues such as gender; in particular, shape-based features of the eyebrow have been shown useful both for biometric recognition and for gender classification. Studies also indicate the usefulness of such features for the task of verification by humans using near-infrared periocular images [10, 11]. Although prior works have studied the performance of periocular-based features under various scenarios, no work has focused on recognition performance on challenging real-world datasets that include images captured under extreme conditions, e.g., occlusions, poor lighting, scanning from hard copy photographs, pose variations, aging, or twins.
The rest of the paper is organized as follows: Section 2 provides a detailed explanation of the proposed periocular recognition framework and the hierarchical three-patch local binary pattern (H-3P-LBP). Section 3 addresses the experiments conducted under covariates, including the experimental setup, the datasets used, the preprocessing steps, and the results. Section 4 provides the conclusions drawn and future work.
The task of recognition (face or periocular) generally includes the following sequence of steps: preprocessing (image alignment, noise removal, illumination correction, etc.), feature extraction, and matching. For this study, we compute the periocular feature descriptors using the uniform LBP, the 3P-LBP, and its proposed variant, the hierarchical three-patch local binary pattern (H-3P-LBP). The uniform LBP follows a pixel-based approach, computing the LBP code of a pixel from its neighboring sampling points, while the other two descriptors are patch-based. Patch-based computation of texture patterns encodes the similarities between neighboring patches of pixels and thus captures information complementary to that of pixel-based descriptors. Patch-based textures also treat colored regions, edges, lines, and complex textures in a unified way, unlike pixel-based techniques.
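To make the pixel-based baseline concrete, the sketch below (an illustrative Python implementation, not the authors' code) computes the basic 8-neighbor LBP code of a pixel and tests whether the resulting pattern is "uniform", i.e., has at most two 0/1 transitions in its circular bit string:

```python
import numpy as np

def lbp_code(img, r, c, radius=1):
    """Compute the basic 8-neighbor LBP code for the pixel at (r, c).

    Each neighbor that is >= the center pixel contributes a 1-bit;
    the 8 bits are packed into a single decimal code.
    """
    center = img[r, c]
    # 8 neighbors sampled clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr * radius, c + dc * radius] >= center:
            code |= 1 << bit
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most
    two 0/1 transitions; the uniform LBP keeps a histogram bin per
    uniform code and pools all non-uniform codes into one bin."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

The descriptor used for matching is then the histogram of such codes over the periocular region, typically computed per spatial block and concatenated.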
2.1 Feature description using hierarchical 3P-LBP
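Following the three-patch LBP of Wolf et al., each pixel p is assigned a code by comparing S patches of size w×w distributed on a ring of radius r around the central patch C_p (a sketch of the standard definition, with d(·,·) a patch distance such as the L2 norm; the notation here is assumed from the cited descriptor):

```latex
\mathrm{3PLBP}_{r,S,w,\alpha}(p) \;=\; \sum_{i=0}^{S-1}
  f\!\big(d(C_i, C_p) - d(C_{i+\alpha \bmod S}, C_p)\big)\, 2^{i},
\qquad
f(x) = \begin{cases} 1, & x \geq \tau \\ 0, & x < \tau \end{cases}
```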
where τ is set to a value slightly larger than zero in order to provide stability in uniform regions, as indicated in prior work.
The multi-scale 3P-LBP can be extracted either by varying the radii or by extracting the 3P-LBP from different image scales. However, the first approach has shortcomings in the way the conventional 3P-LBP is applied to the image. The conventional approach typically extracts microstructures (edges, corners, spots, etc.) of the images, while the hierarchy allows for the extraction of both micro- and macro-structures, which are required for effective texture extraction and discrimination. The stability of the 3P-LBP decreases as the neighborhood radius increases, due to the minimal correlation of the sampling points with the center pixel. Also, the sparse sampling by the 3P-LBP from large neighborhood radii may not result in an adequate representation of the two-dimensional image signal. These observations are verified by the experimental results on the various challenging datasets.
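The second approach, computing the descriptor over an image pyramid, can be sketched as follows (illustrative Python, not the authors' code; it assumes each pyramid level is produced by simple 2x2 block averaging, and `describe` stands in for any per-level descriptor such as a 3P-LBP histogram):

```python
import numpy as np

def downsample(img):
    """Halve each dimension by 2x2 block averaging (a simple stand-in
    for the smoothing + subsampling used to build an image pyramid)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def hierarchical_descriptor(img, levels, describe):
    """Concatenate a per-level feature vector over `levels` scales of
    an image pyramid. Coarser levels expose macro-structures that the
    single-scale descriptor misses; finer levels keep micro-structures."""
    feats = []
    for _ in range(levels):
        feats.append(describe(img))
        img = downsample(img)
    return np.concatenate(feats)
```

The fixed patch size thus covers progressively larger image areas at coarser levels, which is what lets the hierarchy capture macro-structures without enlarging the sampling radius.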
2.2 Match score generation
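With match scores computed independently for the left and right periocular regions, the fused score takes the standard weighted sum form (the exact notation below is assumed, as the text names the weighted sum rule):

```latex
S = \alpha\, S_{\mathrm{left}} + (1 - \alpha)\, S_{\mathrm{right}}
```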
where α denotes the weighting factor. The optimal value of α was determined off-line using a grid search method based upon a randomly selected subset of the four datasets used for this work. The α value used in this work is 0.7. The match scores are fused using the weighted sum rule without any score normalization. Earlier research suggests that the recognition accuracy of the left periocular region (left from the observers' perspective) is significantly higher than that of the right periocular region. Although it has been shown that the left periocular region is more discriminative than the right, the reasoning behind this observation needs further investigation. The selected weighting factor is in accordance with this observation, providing more weight to the left periocular region.
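The off-line grid search for α can be sketched as below (an illustrative Python sketch, not the authors' code; it assumes distance-type scores where smaller means a better match, and a coarse search granularity, neither of which is specified in the text):

```python
import numpy as np

def fuse(left_scores, right_scores, alpha):
    """Weighted-sum fusion of left/right periocular score matrices."""
    return alpha * left_scores + (1.0 - alpha) * right_scores

def grid_search_alpha(left_scores, right_scores, labels, step=0.1):
    """Pick the alpha in [0, 1] that maximizes rank-1 accuracy on a
    tuning subset. Scores are probe-by-gallery distance matrices;
    labels[i] is the gallery column of probe i's true identity."""
    best_alpha, best_acc = 0.0, -1.0
    for alpha in np.arange(0.0, 1.0 + 1e-9, step):
        fused = fuse(left_scores, right_scores, alpha)
        preds = fused.argmin(axis=1)      # nearest gallery entry per probe
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc
```

Because only a single scalar is tuned, an exhaustive sweep is cheap and avoids any score normalization step, consistent with the fusion described above.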
This section provides a detailed discussion on the datasets used for the study, the recognition experiments, and their results.
The following databases were used in our experiments. These databases include face images of subjects taken under unconstrained conditions, such as variations in pose, expression, and illumination; the presence of glasses; facial hair; and occlusions. Also, these databases are publicly available, which makes it easier for the research community to evaluate against them and compare results.
3.1.1 Georgia Tech face database
3.1.2 MORPH aging database
MORPH Album 1 consists of 1,690 scanned photographs of 515 individuals taken over an interval of time. The subjects' ages range from 15 to 68 years, with the age gap between a subject's first and last image ranging from 46 days to 29 years. The face images of Album 1 are frontal or near-frontal images under many types of illumination and eye region occlusions. This album of the MORPH dataset has been used for several years to evaluate the performance of recognition under aging [24, 25]. Figure 3 shows sample images from the MORPH database illustrating the various challenges involved with the images.
3.1.3 WVU/ND twins database
The Twins database is comprised of multi-modal biometric information from pairs of identical and fraternal twins who attended the 2010 Twins Day Festival in Twinsburg, Ohio. The database consists of 6,863 2D color face images from 240 subjects, collected under varying lighting conditions (indoor/outdoor), expressions (neutral/smile), and poses (frontal/non-frontal). Each image is of resolution 600×400 pixels. For our experiments, we used only the images of 100 pairs of identical twins and a triplet. (The identical twins/triplet images were used solely due to the very difficult nature of matching against them.) Only the images with a neutral expression and a frontal pose were included. Figure 3 shows sample images from the Twins face database.
3.1.4 FRGC database
The FRGC database includes around 16,000 images of 466 subjects collected at the University of Notre Dame during the academic years 2002 to 2003 and 2003 to 2004. The images for a subject session include four controlled still images, two uncontrolled still images, and a three-dimensional image. The controlled images were taken under studio lighting conditions and two facial expressions (neutral and smile).
3.2 Image alignment
3.3 Periocular region segmentation
The periocular region is extracted from the aligned face image prior to the feature descriptor computation. There are no standard guidelines in the existing literature that clearly define the periocular region. Often, the periocular region is defined as the skin region around the eyes, the eyes, and the eyebrows. The eyebrows are generally included in the periocular region since they help in discriminating between subjects. The region as defined above is more accurately known as the periorbital region, which relates to the bony structure of the eye orbit and the soft tissue around this structure. Periocular correctly refers to the soft tissue of the region internal to the eye orbit. However, for this work, we adopt the periocular term currently used in the literature. In this work, the segmentation of the periocular region includes the eyes, eyebrows, and the skin region around the eyes. We perform an automatic segmentation using the coordinates of the eye centers in the aligned image. The automatic segmentation is feasible because the alignment process places the eye centers at standard pixel locations. A region of size 128×128 pixels centered on the eye center is extracted for both the left and right eye regions from the aligned image. It is to be noted that the iris is not masked from the extracted periocular images, which can have some effect on the recognition performance. Some researchers have chosen to mask the eyeball area and utilize information from the shape of the eye and the eyebrow region. However, the surface-level texture of the iris can provide additional cues and, hence, can help improve the recognition accuracy. Hence, in this work, we match against both open and closed eyes.
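Since alignment fixes the eye centers at known pixel locations, the segmentation reduces to a fixed-size crop. A minimal sketch (illustrative Python, assuming grayscale images and enough margin around each eye, which the alignment template guarantees):

```python
import numpy as np

def crop_periocular(aligned_face, eye_center, size=128):
    """Extract a size x size periocular patch centered on an eye
    center given as (row, col). Assumes the aligned face leaves at
    least size/2 pixels of margin around each eye, as the fixed eye
    placement after alignment provides."""
    r, c = eye_center
    half = size // 2
    return aligned_face[r - half:r + half, c - half:c + half]
```

The same function is applied twice per face, once with the left eye center and once with the right, to produce the two periocular images matched in the experiments.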
3.4 Effect of periocular image size vs. the number of image scale
This experiment was designed to analyze the effect of the extracted periocular image size and the number of image levels in the H-3P-LBP computation on the recognition performance. The frontal and neutral expression images from the ND Twins database were utilized for this experiment. Choi et al.  have shown that an inter-pupillary distance (IPD) of at least 60 pixels is required for successful recognition. The IPD varies with the image size, and hence, variations in the image size can significantly affect the recognition performance. It is to be noted that the IPD is varied in our experiments by varying the size of the extracted periocular region individually rather than resizing the aligned full-face image. In addition to the periocular image size, the number of scales in the image pyramid computed for the H-3P-LBP can have an impact on the recognition performance as images of different sizes are considered at each level of the pyramid.
Rank-1 accuracies on ND Twins: effect of periocular image size and image scale. The configurations evaluated, by pyramid levels P and periocular image size S×S, were P = 4, S = 256; P = 4, S = 128; P = 3, S = 256; P = 3, S = 128; P = 3, S = 100; P = 1, S = 256; and P = 1, S = 128.
3.5 Recognition accuracy
The periocular recognition performance was studied using the datasets described in Section 3.1. Closed-set identification was performed for all the experiments; hence, no subject was considered an impostor during recognition. Each dataset was divided equally into gallery and probe sets, where the gallery and probe images for a subject were randomly selected. Every probe image was compared against all the gallery images using the uniform LBP, 3P-LBP, and H-3P-LBP matching techniques. The results of the experiments are provided in terms of cumulative match characteristic (CMC) curves and rank-1 recognition accuracy. Matching was performed for the left-left and right-right gallery-probe periocular image pairs, as previous research has indicated that the left and right regions are sufficiently different. The left and right periocular regions were determined based on the location of the nose with respect to the inner corner of the eye; in other words, they were defined from the subject's perspective.
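For reference, the CMC curve for closed-set identification can be computed from a probe-by-gallery score matrix as sketched below (illustrative Python, not the authors' code; it assumes distance scores where smaller means a better match):

```python
import numpy as np

def cmc(dist, labels, max_rank=10):
    """Cumulative match characteristic for closed-set identification.

    dist:   probe-by-gallery distance matrix (smaller = better match)
    labels: labels[i] is the gallery column of probe i's true identity
    Returns an array where entry k-1 is the fraction of probes whose
    true match appears within the top k gallery candidates.
    """
    order = np.argsort(dist, axis=1)       # gallery sorted per probe
    ranks = np.empty(dist.shape[0], dtype=int)
    for i, true_col in enumerate(labels):
        ranks[i] = int(np.where(order[i] == true_col)[0][0])  # 0-based
    return np.array([(ranks < k).mean() for k in range(1, max_rank + 1)])
```

The rank-1 accuracy reported throughout the tables is simply the first entry of this curve.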
Rank-1 accuracies for the left and right periocular regions, and for the fusion of scores from the left and right periocular regions, reported for MORPH Album 1 (%), Georgia Tech (%), and Twins DB (%).
From these results, it can be seen that the patch-based approaches perform better than the pixel-based computation of the LBP. The performance of all the descriptors on MORPH Album 1 indicates the significance of effects such as template aging, pose variations, and expression changes in periocular recognition. Also, the images of MORPH are scanned photographs, in contrast with the other datasets; this indicates the need for better matching algorithms when recognizing subjects from scanned, low-resolution images. It is also to be noted that there is a significant difference between the recognition accuracies of the left and right periocular regions, which points to the side-specific features extracted by the descriptors. The matching accuracies indicate that performance was improved by computing the 3P-LBP in a hierarchical fashion; this is due to the extraction of micro- and macro-patterns, both of which are required for better texture discrimination. The best recognition performance was achieved when the left and right periocular scores were fused together, which indicates the value of combining side-specific features for better recognition.
3.6 Recognition under non-ideal conditions
The performance of all the descriptors was analyzed from the perspective of matching gallery and probe images with the following scenarios: (1) neutral-neutral (expressions), (2) neutral-smile, (3) smile-smile, (4) frontal pose - non-frontal pose, (5) non-frontal pose - frontal pose, (6) non-frontal pose - non-frontal pose, (7) no glasses-with glasses, and (8) eyes open-eyes closed. In addition, the effect of template aging on periocular recognition was also studied. The Georgia Tech face database and the MORPH Album 1 were utilized for this study. Recognition experiments were conducted using the images that were categorized based on the presence of the above mentioned factors.
For the neutral-neutral gallery-probe scenario, one image from each subject was used in the gallery, and the remaining images were used as probes. The experiment studying the effect of template aging utilized the youngest images as the gallery and the older images as probes. This corresponds to the real-world scenario of passport verification or security screening, where an image enrolled at a younger age is compared against a later, older image. For the remaining experiments, all the images from the respective subsets were used as gallery and probe for each scenario.
3.6.1 Expression variations
Images from the Georgia Tech face database were used for these experiments. The effect of change in expressions in the periocular region was analyzed by comparing the neutral expression image with those having a smiling expression. The database included 613 images from 50 subjects with neutral expression and 122 images from 39 subjects with a smiling expression.
Rank-1 accuracies for neutral-neutral (gallery-probe) matching for Georgia Tech face database
Rank-1 accuracies for neutral-smile (gallery-probe) matching for Georgia Tech database
Rank-1 accuracies for smile-smile (gallery-probe) matching for Georgia Tech database
3.6.2 Pose variations
Rank-1 accuracies for frontal - non-frontal (gallery-probe) matching for Georgia Tech database
Rank-1 accuracies for non-frontal - frontal (gallery-probe) matching for Georgia Tech database
Rank-1 accuracies for non-frontal - non-frontal (gallery-probe) matching for Georgia Tech database
3.6.3 Template aging
Rank-1 accuracies obtained with age-varying data from MORPH Album 1
3.6.4 Effect of closed eyelids
Rank-1 accuracies obtained using eyelid closed images from Georgia Tech database
3.6.5 Effect of eyeglasses
Rank-1 accuracies obtained with images from Georgia Tech database and ND Twins database with eyeglasses
In this paper, we investigated the performance of the LBP and its variants in periocular-based recognition using unconstrained face images. We proposed the multi-scale, hierarchical three-patch LBP framework, a variant of the three-patch LBP. The matching performance was evaluated using the uniform LBP, the three-patch LBP, and the hierarchical three-patch LBP. The effects of covariates such as pose variations, facial expression, template aging, and occlusions on periocular recognition performance were discussed. Experiments on four challenging datasets yielded the best recognition results for the proposed method when compared with the LBP and its variants. The experiments indicate that the best results were achieved when matching was performed for the left and right periocular regions individually and their scores were then fused. The results also indicate that there is significant discrimination between the left and right periocular regions of the same subject. The performance of the patch-based LBPs improves when images with neutral expressions are used; the uniform LBP, however, remains robust for both neutral and varied expressions.
There is a significant effect on the recognition performance due to large pose variations, while the effects of minimal pose variations are insignificant due to the pose-invariant nature of the LBP operator. Aging effects are prominent in the periocular region of a face, which increases the intra-class dissimilarities as the age gap between the gallery and probe increases. Our experiments also indicate that conventional LBP and its variants fail to capture these age-based differences.
Masking of the iris and the eye region has an impact on the performance of the patch-based descriptors, while it improves the performance of the pixel-based LBP. While the presence of eyeglasses helps face recognition systems, it degrades the performance of a periocular recognition system. The performance of periocular recognition could be further improved by considering cues such as eyelashes, eye shape, and size.
In future work, we will explore the use of different distance measures for matching, as the Euclidean distance has been shown not to be the most robust in the face domain. Furthermore, we will explore developing texture-based features that are resilient to aging changes. An age-invariant texture technique would have far-reaching impacts on face-based biometric techniques.
This work was partially funded by the Biometric Center of Excellence, United States Federal Bureau of Investigation, and CASIS supported by the Army Research Laboratory.
- Park U, Ross A, Jain AK: Periocular biometrics in the visible spectrum: a feasibility study. In Proceedings of the 3rd IEEE International Conference on Biometrics: Theory, Applications, and Systems. IEEE, Piscataway; 2009:153-158.
- Bharadwaj S, Bhatt HS, Vatsa M, Singh R: Periocular biometrics: when iris recognition fails. In Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications, and Systems. IEEE, Piscataway; 2010:1-6.
- Woodard D, Pundlik S, Miller P, Jillela R, Ross A: On the fusion of periocular and iris biometrics in non-ideal imagery. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR). IEEE, New York; 2010.
- Miller P, Rawls A, Pundlik S, Woodard D: Personal identification using periocular skin texture. In Proceedings of the 2010 ACM Symposium on Applied Computing. ACM, New York; 2010.
- Woodard D, Pundlik S, Lyle J, Miller P: Periocular region appearance cues for biometric identification. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, New York; 2010.
- Lyle JR, Miller PE, Pundlik SJ, Woodard DL: Soft biometric classification using local appearance periocular region features. Pattern Recognit 2012, 45(11):3877-3885. doi:10.1016/j.patcog.2012.04.027
- Padole CN, Proenca H: Periocular recognition: analysis of performance degradation factors. In The 5th IAPR International Conference on Biometrics (ICB). IEEE, New York; 2012:439-445.
- Xu FJ, Luu K, Savvides M, Bui TD, Suen CY: Investigating age invariant face recognition based on periocular biometrics. In International Joint Conference on Biometrics (IJCB). IEEE, New York; 2011.
- Dong Y, Woodard DL: Eyebrow shape-based features for biometric recognition and gender classification: a feasibility study. In International Joint Conference on Biometrics (IJCB). IEEE, New York; 2011:1-8.
- Hollingsworth KP, Bowyer KW, Flynn PJ: Useful features for human verification in near-infrared periocular images. Image Vis. Comput 2011, 29(11):707-715. doi:10.1016/j.imavis.2011.09.002
- Hollingsworth KP, Darnell SS, Miller PE, Woodard DL, Bowyer KW, Flynn PJ: Human and machine performance on periocular biometrics under near-infrared light and visible light. IEEE Trans. Inf. Forensics Secur 2012, 7(2):588-601.
- Nefian AV, Hayes MH: Maximum likelihood training of the embedded HMM for face detection and recognition. In International Conference on Image Processing. IEEE, New York; 2000.
- Phillips PJ, Flynn PJ, Bowyer KW, Bruegge RWV, Grother PJ, Quinn GW, Pruitt M: Distinguishing identical twins by face recognition. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, New York; 2011:15-192.
- Phillips PJ, Flynn PJ, Scruggs T, Bowyer KW, Worek W: Preliminary face recognition grand challenge results. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition. IEEE, New York; 2006:15-24.
- Ricanek Jr K, Tesafaye T: MORPH: a longitudinal image database of normal adult age-progression. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition. IEEE, New York; 2006:341-345.
- Chen L, Man H, Nefian AV: Face recognition based on multi-class mapping of Fisher scores. Pattern Recognit., Spec. Issue Image Underst. Digital Photographs 2005, 38(6):799-811.
- Le THN, Luu K, Seshadri K, Savvides M: A facial aging approach to identification of identical twins. In IEEE 5th International Conference on Biometrics: Theory, Applications, and Systems. IEEE, Piscataway; 2012.
- Albert AM, Ricanek Jr K: The MORPH database: investigating the effects of adult craniofacial aging on automated face-recognition technology. Forensic Sci. Commun 2008, 10(2).
- Maenpaa T, Ojala T, Pietikainen M, Soriano M: Robust texture classification by subsets of local binary patterns. In Proceedings of the 15th International Conference on Pattern Recognition. IEEE, New York; 2000.
- Wolf L, Hassner T, Taigman Y: Descriptor based methods in the wild. In Faces in Real-Life Images Workshop at ECCV. 2008.
- Shechtman E, Irani M: Matching local self-similarities across images and videos. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition. IEEE, New York; 2007:1-8.
- Lee SW, Li SZ: Learning multi-scale block local binary patterns for face recognition. In LNCS 4642, ICB'07. Springer, Berlin; 2007:828-837.
- Woodard D, Pundlik S, Lyle J, Miller P: Periocular region appearance cues for biometric identification. In Computer Vision and Pattern Recognition Workshops. IEEE, New York; 2010:162-169.
- Ricanek K, Boone E: The effect of normal adult aging on standard PCA face recognition accuracy rates. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN). IEEE, New York; 2005:2018-2023.
- Ricanek K, Boone E, Patterson E: Craniofacial aging on the eigenface biometric. In Proceedings of the 6th IASTED Visualization, Imaging, and Image Processing (VIIP). Palma de Mallorca; 20-30 August 2006:249-253.
- FaceVACS Software Developer Kit. Cognitec Systems GmbH; 2012. http://www.cognitec-systems.de
- Phillips PJ, Beveridge JR, Draper B, Givens G, O'Toole A, Bolme D, Dunlop J, Lui YM, Sahibzada H, Weimer S: An introduction to the good, the bad, and the ugly face recognition challenge problem. Image Vis. Comput 2012, 30(3):206-216. doi:10.1016/j.imavis.2011.11.001
- Choi HC, Park U, Jain AK: PTZ camera assisted face acquisition, tracking and recognition. In Biometrics: Theory, Applications, and Systems. 2010:1-6.
- Ling H, Soatto S, Ramanathan N, Jacobs D: Face verification across age progression using discriminative methods. IEEE Trans. Inf. Forensics Secur 2010, 5:82-91.
- Mahalingam G, Kambhamettu C: Face verification across age progression using AdaBoost and local binary patterns. In Proceedings of the 7th Indian Conference on Computer Vision, Graphics and Image Processing. ACM, New York; 2010:101-108.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.