Human recognition based on retinal images and using new similarity function
- Amin Dehghani,
- Zeinab Ghassabi,
- Hamid Abrishami Moghaddam and
- Mohammad Shahram Moin
© Dehghani et al.; licensee Springer. 2013
Received: 25 January 2013
Accepted: 9 September 2013
Published: 31 October 2013
This paper presents a new human recognition method based on features extracted from retinal images. The proposed method comprises several steps: feature extraction, a phase correlation technique, and feature matching for recognition. The Harris corner detector is used for feature extraction. Then, the phase correlation technique is applied to estimate the rotation angle caused by head or eye movement in front of a retina fundus camera. Finally, a new similarity function is used to compute the similarity between features of different retinal images. Experimental results on a database of 480 retinal images, obtained from 40 subjects of the DRIVE dataset and 40 subjects of the STARE dataset, demonstrated an average true recognition accuracy of 100% for the proposed method. The success rate and the number of images used show the effectiveness of the proposed method in comparison to its counterparts.
Nowadays, reliable, fast, and accurate recognition of a person is an important requirement for security systems, and recent advances in technology and the increasing need for security require the use of reliable biometric systems. The biometric attributes used in security systems for recognition include fingerprint, hand geometry, face, iris, and retina [2–8]. Each biometric feature has its strengths and weaknesses, and the choice depends on the application. Among these, the retina provides a higher level of security for recognition due to the uniqueness and stability of the blood vessel pattern during one's life. However, because of technological limitations in manufacturing the specific imaging apparatus, its usage is limited to places where a high security level is needed. Depending on the application context, a biometric system may operate in verification or identification mode. In verification mode, the system confirms or denies a person's claimed identity by comparing the captured biometric features with the person's own biometric template(s) stored in the database. In identification mode, the system recognizes an individual by searching the templates of all users in the database for a match. Therefore, for identification, the security system must first decide whether a person exists in the database; after confirming existence, it must determine the identity of that person. Most recognition methods based on retinal images operate only in identification mode or only in verification mode.
Recognition methods based on retinal images are divided into two classes. Some algorithms are based on the vessel segmentation results of retinal images; consequently, their computational time is high. Other algorithms are based on features extracted from retinal images, and their implementation time is usually low. Most previously proposed methods are effective only in identification mode or only in verification mode. In this paper, a new identification and verification method based on retinal images is presented. The proposed method comprises several steps: the Harris corner detector for feature extraction, a phase correlation technique to estimate the rotation angle caused by head or eye movement in front of the retina fundus camera, and a new similarity function to compute the similarity between the features of dataset and test images. At the first step, the Harris corner detector is applied as a fast detector for feature extraction. Moreover, the most important problem in using retinal images for human recognition is head or eye movement in front of the fundus camera. Previously proposed methods did not offer any solution for head or eye movement, which may introduce errors into the results of retina-based human recognition. In this paper, the phase correlation technique is used to estimate and compensate the rotation angle caused by head or eye movement in front of the fundus camera. Finally, using a new similarity function, the similarity of each test image to each retinal image of the dataset is determined.
The rest of the paper is organized as follows. Section 2 reviews previously proposed methods for human recognition using retinal images. In Section 3, the Harris corner detector is used for feature extraction. Section 4 describes the phase correlation technique used to compensate the rotation angle caused by head or eye movement in front of the retina fundus camera. Section 5 is devoted to the similarity function for feature matching. The recognition method and evaluation results are presented in Section 6. Finally, concluding remarks are given in the last section.
2. Review of previously proposed methods
Farzin et al.  proposed a novel method based on the features obtained from retinal images. This method was composed of three principal modules including blood vessel segmentation, feature generation, and feature matching. Blood vessel segmentation module had the role of extracting blood vessel pattern from retinal images. Feature generation module included optic disc detection and selecting circular region around the optic disc of the segmented image. Then, using a polar transformation, a rotation invariant template was created. Next, these templates were analyzed in three different scales using wavelet transform to separate vessels according to their diameter sizes. In the last stage, vessel position and orientation in each scale were used to define a feature vector for each subject in the database. For feature matching, they introduced a modified correlation measure to obtain a similarity index for each scale of the feature vector. Then, they computed the total value of the similarity index by summing scale-weighted similarity indices. The computational time of this method for segmentation, feature extraction, and matching is high, and this method was used only in identification mode.
Xu et al.  proposed a new recognition method. They used the green grayscale ocular fundus image. At the first step, the skeleton feature of the optic fundus blood vessels was extracted using contrast-limited adaptive histogram equalization. After filtering and extracting the shape feature, the shape curve of the blood vessels was obtained. Shape curve matching was then carried out by means of reference point matching. In their method, feature matching consists of finding the affine transformation parameters that relate the query image to its best corresponding enrolled image. The computational time of this algorithm is high because a number of rigid motion parameters must be computed for all possible correspondences between the query and enrolled images in the dataset.
Ortega et al.  used a fuzzy circular Hough transform to localize the optic disc in retinal images. Then, they defined feature vectors based on the ridge endings and bifurcations of vessels obtained from a crease model of the retinal vessels inside the optic disc. For matching, they used an approach similar to that of  to compute the parameters of the rigid transformation between feature vectors that gives the highest matching score. This algorithm is computationally more efficient than the one presented in . In , new methods were used to decrease the number of transformations instead of using points at a specific distance from the optic disc center. This method was used only in verification mode.
Tabatabaee et al.  presented a new algorithm based on fuzzy C-means clustering. They used the Haar wavelet and the snakes model for optic disc localization. The Fourier-Mellin transform coefficients and simplified moments of the retinal image were used as the extracted features for the system. The computational cost and implementation time of this algorithm are high, and its performance has been evaluated only on a small dataset.
Shahnazi et al.  proposed a new method based on the wavelet energy feature (WEF), a powerful tool of multi-resolution analysis. WEF can reflect the wavelet energy distribution of vessels of different thickness and width in several directions at different wavelet decomposition levels (scales), and thus its ability to discriminate retinas is very strong. Simple computation is another virtue of WEF. Because semiconductor sensors and varying environmental temperatures in electronic imaging systems produce noisy images, they used noisy retinal images for recognition. This method is based on the segmentation results of retinal images and was evaluated only on the DRIVE dataset.
Oinonen et al.  proposed a novel method for verification based on minutiae features. The proposed method consisted of three steps: blood vessel segmentation, feature extraction, and feature matching. In practice, vessel segmentation can be viewed as a preprocessing phase for feature extraction. Then, vessel crossings and their orientation information were obtained. These data were matched with the corresponding ones from the comparison image. The method used the vessel direction information for improved matching robustness. The computational time of this method for segmentation, feature extraction, and matching is high, and this method was used in verification mode.
3. Feature extraction
3.1 Corner detection algorithms
We consider a corner as the intersection of two edges, or as a point whose local neighborhood contains two dominant and different edge directions. There are several methods for corner detection [20–24]. The Harris corner detector is one of the most well-known corner detectors and is based on measuring corner strength. A comparison between the Harris corner detector and other corner detectors such as SUSAN shows the superiority of the Harris detector in terms of stability and complexity (running time) . Therefore, the Harris corner detector is used for feature extraction.
3.2 Harris algorithm
Step 1: Apply the gradient function in the x and y directions.
Step 2: Calculate the matrices A, B, and C in Equation (5).
Step 3: Calculate the convolutions of A, B, and C with the Gaussian window in Equation (6).
Step 4: Obtain R in Equation (9).
Step 5: Apply a threshold (Th) to eliminate the points that are not corners.
Step 6: Finally, to prune the corners, place a window around each point that survives the threshold; within each window, the point with the maximum value is kept as a corner.
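The six steps above can be sketched as follows. This is a minimal illustration in Python with NumPy/SciPy, not the paper's implementation; the parameter values are generic defaults (the values actually used in the paper appear in Section 6), and R follows the standard Harris response det(M) − k·trace(M)².

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, k=0.04, sigma=1.5, thresh=1e6, win=7):
    """Sketch of the six Harris steps listed above."""
    img = img.astype(np.float64)
    # Step 1: gradients in the x and y directions
    Ix = ndimage.sobel(img, axis=1)
    Iy = ndimage.sobel(img, axis=0)
    # Step 2: the matrices A, B, and C (products of the gradients)
    A, B, C = Ix * Ix, Iy * Iy, Ix * Iy
    # Step 3: convolve A, B, and C with a Gaussian window
    A = ndimage.gaussian_filter(A, sigma)
    B = ndimage.gaussian_filter(B, sigma)
    C = ndimage.gaussian_filter(C, sigma)
    # Step 4: corner response R = det(M) - k * trace(M)^2
    R = (A * B - C * C) - k * (A + B) ** 2
    # Step 5: threshold out the points that are not corners
    R[R < thresh] = 0
    # Step 6: prune by keeping only the local maximum in each win x win window
    local_max = ndimage.maximum_filter(R, size=win)
    return np.argwhere((R == local_max) & (R > 0))  # (row, col) corners
```

Along a pure edge one eigenvalue of the structure tensor is near zero, so R is negative there and the threshold removes it; only true corners survive steps 5 and 6.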
4. Rotation compensation
One of the most important problems in using retinal images for recognition is head or eye movement in front of the fundus camera. Therefore, a method based on the Fourier-Mellin transform is used to estimate the rotation angle caused by head or eye movement.
4.1 Phase correlation technique
By detecting the peak of δ(x − x₀, y − y₀), the translation parameters (x₀, y₀) can be acquired.
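For the translation-only case, the technique can be sketched as follows. This is an illustrative Python implementation, not the paper's code: the normalized cross-power spectrum of two equally sized images is inverted, and the peak of the resulting impulse gives the shift.

```python
import numpy as np

def phase_correlation_shift(f, g):
    """Estimate the (row, column) translation between two equally sized
    images, where g is a circularly shifted copy of f."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    # Normalized cross-power spectrum: its inverse transform is ideally
    # the impulse delta(x - x0, y - y0) mentioned above
    R = np.conj(F) * G
    R /= np.abs(R) + 1e-12          # guard against division by zero
    corr = np.abs(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of the range correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

Normalizing by the magnitude keeps only phase information, which is what makes the inverse transform an impulse rather than a broad correlation surface.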
4.2 Phase correlation technique for detection of translation and rotation
5. Similarity function
For recognition based on the corners obtained from vessel crossings, a new similarity function is used to match the features of dataset and test images. In contrast to other methods, such as the one proposed in , instead of finding similar points between two sets of points, we first build a model that describes the relations between the points within each set (image), without needing to segment the vessels in the retinal images. We then compute the similarity between the model functions of different images to obtain the similarity between the images themselves. Consequently, the number of similar points between two images does not play an important role in the similarity result, because matching and recognition are based on the similarity between the model functions.
5.1 Model construction
The model function describes the closeness and similarity of a given feature vector (v) to all the feature vectors around the interest point, and α is a constant that takes different values. Therefore, using Equation (18), we obtain a model that describes the closeness and relations between the corners in each set of points. Finally, Equation (19) is used to calculate the similarity between the model functions of different images obtained by Equation (18).
where T is a predetermined threshold and is set to 0.3. In this paper, to decrease the computational time, only r_jk is used to compose the model function.
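Since Equations (18) and (19) are not reproduced in this excerpt, the following sketch only illustrates the general idea under stated assumptions: a model function built from normalized pairwise corner distances r_jk kept below the threshold T = 0.3, and a similarity mapped into [0, 1] with an exponential weighted by α. The function names and exact functional forms here are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def model_function(corners, T=0.3):
    """Hypothetical model of the relations between corners in one image:
    normalized pairwise distances r_jk, keeping only those below T."""
    pts = np.asarray(corners, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    r = d / (d.max() + 1e-12)        # normalize r_jk into [0, 1]
    return np.sort(r[r < T])         # sorted, so point order is irrelevant

def similarity(model_a, model_b, alpha=1.0):
    """Hypothetical similarity between two model functions, mapped into
    [0, 1] with an exponential weighted by alpha."""
    n = min(len(model_a), len(model_b))
    if n == 0:
        return 0.0
    diff = np.abs(model_a[:n] - model_b[:n]).mean()
    return float(np.exp(-alpha * diff))
```

Because the model is built from relative distances within one set of points, it is unchanged by translating the whole corner set, and two identical images yield a similarity of exactly 1.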
6. Proposed method
Previous sections were devoted to the feature extraction (using the corner detector) and feature matching methods. This section explains the proposed recognition method.
At the first step, the Harris corner detector is applied to the dataset images to extract corners as features. The parameters used in the Harris corner detector are k = 0.17, Th = 7 × 10^4, and a 7 × 7 window for pruning the corners; only corners whose distance from the center of the image is less than 190 pixels are used for recognition. Then, the similarity model explained in Section 5 is used to formulate a set of vectors for each retinal image.
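The 190-pixel radius constraint can be applied as a simple mask over the detected corners. The helper below is an illustrative sketch (the function name is ours; the radius and the use of the image center are the values stated above).

```python
import numpy as np

def corners_within_radius(corners, image_shape, radius=190):
    """Keep only corners closer than `radius` pixels to the image center."""
    cy, cx = image_shape[0] / 2.0, image_shape[1] / 2.0
    pts = np.asarray(corners, dtype=float)
    dist = np.hypot(pts[:, 0] - cy, pts[:, 1] - cx)
    return pts[dist < radius]
```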
6.1 Test images
As noted, the most important problem in using retinal images for recognition is head or eye movement in front of the fundus camera. We estimate the rotation angle of the test image with respect to each dataset image using the phase correlation technique. Then, for each dataset image, we rotate the test image by the negative of the angle obtained. To the new image we apply the Harris corner detector and, to decrease the computational cost, use only corners whose distance from the center of the image is less than 190 pixels. Finally, we use the similarity model explained above to obtain the similarity between the corners. Therefore, for each test image, we obtain as many sets of similarity vectors and model functions as there are images in the dataset.
6.2 Recognition method
Our recognition method contains two steps. For each test image, we determine its rotation angle with respect to each image in the dataset and rotate the test image by the negative of the angle obtained. Then, a region of 460 × 460 pixels around the center of the new image is extracted. The absolute value of the difference between the histogram of the rotated image and the histogram of each dataset image (the 460 × 460 pixel region around its center) is calculated, and the results are summed to obtain one value per test image for each dataset image. Depending on the situation, α in Equation (19) takes different values: if the histogram difference is less than 28,000, α = 0.25; otherwise, α = 1. Using the exponential function, we obtain similarity results in the range [0, 1] for each test image.
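The histogram comparison and the α rule can be sketched as follows. This is an illustrative Python sketch; `model_diff` stands in for the difference term of Equation (19), whose exact form is not reproduced in this excerpt, while the 460 × 460 crop, the 28,000 cutoff, and the α values are as stated above.

```python
import numpy as np

def histogram_distance(img_a, img_b, bins=256):
    """Sum of absolute differences between the gray-level histograms of
    the central 460 x 460 regions of two images."""
    def central_hist(img):
        cy, cx = img.shape[0] // 2, img.shape[1] // 2
        crop = img[cy - 230:cy + 230, cx - 230:cx + 230]
        return np.histogram(crop, bins=bins, range=(0, 256))[0]
    return int(np.abs(central_hist(img_a) - central_hist(img_b)).sum())

def similarity_score(model_diff, hist_dist):
    """alpha = 0.25 when the histogram distance is below 28,000,
    otherwise alpha = 1; the exponential maps the score into [0, 1]."""
    alpha = 0.25 if hist_dist < 28000 else 1.0
    return float(np.exp(-alpha * model_diff))
```

A small histogram distance thus relaxes α, so the same model difference yields a higher similarity for pairs whose central regions already look alike.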
We applied the proposed algorithm on a database of 80 subjects: 40 images from the DRIVE database (565 × 584 pixels)  and 40 images from the STARE database (720 × 576 pixels) . We randomly rotated each image six times to obtain 480 images and evaluated the performance of our recognition method in five different experiments, as follows. In identification mode, the security system must first decide whether the test image (person) exists in the dataset; after confirming existence, it must determine the identity of the test person. In verification mode, the system confirms or denies a person's claimed identity: if the similarity between the retina image of the test person and the retina image of the claimed identity is larger than a predetermined threshold, the system confirms the claim. For identification mode, all steps explained in Sections 6.1 and 6.2 are performed, and the similarity between the retina image of the test person and all retina images in the dataset is determined. If the maximum similarity exceeds the predetermined threshold, the test image belongs to the dataset, and the dataset image yielding the maximum similarity gives the identity of the test image. To obtain the threshold, we performed a global scan over different thresholds; the best threshold for both identification and verification mode is 0.3.
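The two decision rules above reduce to a few lines (an illustrative sketch; the function names are ours, and the 0.3 threshold is the value found by the global scan):

```python
def identify(similarities, threshold=0.3):
    """Identification: accept the best-matching enrolled image only if
    its similarity exceeds the threshold, else reject as unknown."""
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    if similarities[best] > threshold:
        return best          # index of the matching enrolled image
    return None              # test person is not in the dataset

def verify(similarity, threshold=0.3):
    """Verification: confirm the claimed identity only if the similarity
    to the claimed template exceeds the threshold."""
    return similarity > threshold
```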
6.3.1 Experiment A
The first 30 images of DRIVE database and the first 30 images of STARE were enrolled, and 80 images of DRIVE and STARE databases with six images per subject were entered to the system as queries.
6.3.2 Experiment B
The last 30 images of DRIVE database and the last 30 images of STARE were enrolled, and 80 images of DRIVE and STARE databases with six images per subject were entered to the system as queries.
6.3.3 Experiment C
The first 20 images of DRIVE database and the first 20 images of STARE database were enrolled, and 80 images from DRIVE and STARE databases with six images per subject were entered to the system as queries.
6.3.4 Experiment D
The first 25 images of DRIVE database and the last 25 images of STARE database were enrolled, and 80 images of DRIVE and STARE databases with six images per subject were entered to the system as queries.
6.3.5 Experiment E
Results of different recognition methods (table columns: number of images, number of subjects, identification or verification mode, and running time in seconds) were compared for the following methods:
- Dehghani et al. 
- Farzin et al. 
- Xu et al. 
- Ortega et al. 
- Tabatabaee et al. 
- Shahnazi et al. 
- Islam et al. 
- Sukumaran et al. 
- Barkhoda et al. 
- Barkhoda et al. 
- Proposed method (corners and similarity function)
In this paper, we proposed a new human recognition method based on corners and a similarity function of retinal images, without using vessel segmentation methods. Security systems, depending on their applications, may perform in identification or verification mode. Unlike other counterpart methods [8, 13, 14, 29–32], the proposed method does not use segmentation methods, which increase computational time. We compensated the rotation angle caused by head or eye movement (gaze angle) in front of the retina fundus camera using the phase correlation technique, which was not addressed in the counterpart methods. To extract corners in the retinal image, the Harris corner detector and some additional steps, such as thresholding and a pruning algorithm, were used. In situations where there are pathological regions in the retinal images, the Harris corner detector may not be effective; in these situations, other, more suitable corner detectors must be used.
Then, corners whose distance from the center of the image was less than 190 pixels were used to decrease the computational time of the proposed method. Finally, we used a similarity function with some limitations on its coefficient for recognition. The advantage of the similarity function used in this paper is that we first obtain the similarity between the points within each set of points and then, using the similarity function, estimate the similarity between the two sets. Unlike the similarity function used in the counterpart method , the number of similar points between two sets of points does not play an important role in recognition. The success rate and the number of images used in the proposed method show its effectiveness in comparison to the counterpart methods.
This work was partially supported by Research Institute for ICT under grant no. T-500-4789.
- Dehghani A, Abrishami Moghaddam H, Moin M-S: Retinal identification based on rotation invariant moments. In Proceedings of the 5th International Conference on Bioinformatics and Biomedical Engineering. Wuhan; 2011.
- Kovacs-Vajna ZM: A fingerprint verification system based on triangular matching and dynamic time warping. IEEE Trans Pattern Anal Mach Intell. 2000, 22: 1266-1276.
- Kumar A, Zhang D: Hand geometry recognition using entropy-based discretization. IEEE Trans Inf Forensics Security. 2007, 2(2): 181-187.
- Anbarjafari G: Face recognition using color local binary pattern from mutually independent color channels. EURASIP J Image Video Process. 2013, 2013: 6. doi:10.1186/1687-5281-2013-6
- Lin C-H, Chen J-L, Gaing Z-L: Combining biometric fractal pattern and particle swarm optimization-based classifier for fingerprint recognition. Math Probl Eng. 2010, 2010: 1-15. doi:10.1155/2010/328676
- Rossant F, Mikovicova B, Adam M, Trocan M: A robust iris identification system based on wavelet packet decomposition and local comparisons of the extracted signatures. EURASIP J Adv Signal Process. 2010, 2010: 415307. doi:10.1155/2010/415307
- Nakissa B, Shahram Moin M: A new user dependent iris recognition system based on an area preserving pointwise level set segmentation approach. EURASIP J Adv Signal Process. 2009, 2009: 980159. doi:10.1155/2009/980159
- Farzin H, Abrishami Moghaddam H, Moin M-S: A novel retinal identification system. EURASIP J Adv Signal Process. 2008, 2008: 280635. doi:10.1155/2008/280635
- Jain AK, Ross A, Prabhakar S: An introduction to biometric recognition. IEEE Trans Circuits Syst Video Technol. 2004, 14(1): 4-20.
- Hill RB: Retinal identification. In Biometrics: Personal Identification in Networked Society. Springer, Berlin; 1999: 126.
- Retica Systems. http://venturebeatprofiles.com/company/profile/retica-systems
- Iris recognition. http://en.wikipedia.org/wiki/Iris_recognition
- Xu Z-W, Guo X-X, Hu X-Y, Cheng X: The blood vessel recognition of ocular fundus. In Proceedings of the 4th International Conference on Machine Learning and Cybernetics. Guangzhou; 2005: 4493-4498.
- Ortega M, Penedo MG, Rouco J, Barreira N, Carreira MJ: Retinal verification using a feature points-based biometric pattern. EURASIP J Adv Signal Process. 2009, 2009: 1-13.
- Ortega M, Marino C, Penedo MG, Blanco M, Gonzalez F: Biometric authentication using digital retinal images. In Proceedings of the 5th WSEAS International Conference on Applied Computer Science. Hangzhou; 2006: 422-427.
- Tabatabaee H, Milani-Fard A, Jafariani H: A novel human identifier system using retina image and fuzzy clustering approach. In Proceedings of the 2nd IEEE International Conference on Information and Communication Technologies. Damascus; 2006: 1031-1036.
- Shahnazi M, Pahlevanzadeh M, Vafadoost M: Wavelet based retinal recognition. In Proceedings of the 9th IEEE International Symposium on Signal Processing and Its Applications. Sharjah; 2007: 1-4.
- Oinonen H, Forsvik H, Ruusuvuori P, Yli-Harja O, Voipio V, Huttunen H: Identity verification based on vessel matching from fundus images. In Proceedings of the 17th International Conference on Image Processing. Hong Kong; 2010: 4089-4092.
- Vijaya Kumari V, Suriyanarayanan N: Blood vessel extraction using wiener filter and morphological operation. Int J Comput Sci Emerg Tech. 2010, 1(4): 7-10.
- Moravec HP: Towards automatic visual obstacle avoidance. In Proceedings of the International Joint Conference on Artificial Intelligence. Cambridge, MA; 1977: 584.
- Harris C, Stephens M: A combined corner and edge detector. In Proceedings of the Alvey Vision Conference. University of Manchester; 1988: 147-151.
- Noble A: Finding corners. Image Vis Comput J. 1988, 6(2): 121-128.
- Mokhtarian F, Suomela R: Robust image corner detection through curvature scale space. IEEE Trans Pattern Anal Mach Intell. 1998, 20(12): 1376-1381.
- Rosten E, Drummond T: Machine learning for high-speed corner detection. In Computer Vision - ECCV 2006 (9th European Conference on Computer Vision, Graz, 7-13 May 2006), Lecture Notes in Computer Science, vol. 3951. Edited by Leonardis A, Bischof H, Pinz A. Springer, Heidelberg; 2006: 430-443.
- Chen J, Zou LH, Zhang J, Dou LH: The comparison and application of corner detection algorithms. J Multimed. 2009, 4(6): 435-441.
- Lin L, Liu Y: Registration algorithm based on image matching for outdoor AR system with fixed viewing position. IEE Proc Vis Image Signal Process. 2006, 153(1): 57-62.
- Krish K, Heinrich S, Snyder WE, Cakir H, Khorram S: A new feature based image registration algorithm. In ASPRS Annual Convention. Portland, OR; 2008.
- Dehghani A, Abrishami Moghaddam H, Moin M-S: Optic disc localization in retinal images using histogram matching. EURASIP J Image Video Process. 2012, 1: 1-11.
- Dehghani A, Moin M-S, Saghafi M: Localization of the optic disc center in retinal images based on the Harris corner detector. Biomed Eng Lett. 2012, 2(3): 198-206.
- Islam MN, Siddiqui MA, Paul S: An efficient retina pattern recognition algorithm (RPRA) towards human identification. In Proceedings of the 2nd International Conference on Computer, Control and Communication. Karachi; 2009: 1-6.
- Sukumaran S, Punithavalli M: Retina recognition based on fractal dimension. IJCSNS Int J Comput Sci Netw Secur. 2009, 9(10): 66-7.
- Barkhoda W, Akhlaqian Tab F, Deljavan Amiri M: Rotation invariant retina identification based on the sketch of vessels using angular partitioning. In Proceedings of the International Multiconference on Computer Science and Information Technology. Mragowo; 2009: 3-6.
- Barkhoda W, Akhlaqian F, Amiri M-D, Nouroozzadeh M-S: Retina identification based on the pattern of blood vessels using fuzzy logic. EURASIP J Adv Signal Process. 2011, 2011: 1-8.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.