Human recognition based on retinal images and using new similarity function

Abstract

This paper presents a new human recognition method based on features extracted from retinal images. The proposed method consists of feature extraction, a phase correlation technique, and feature matching for recognition. The Harris corner detector is used for feature extraction. Then, the phase correlation technique is applied to estimate the rotation angle caused by head or eye movement in front of a retina fundus camera. Finally, a new similarity function is used to compute the similarity between the features of different retina images. Experimental results on a database of 480 retinal images, obtained from 40 subjects of the DRIVE dataset and 40 subjects of the STARE dataset, demonstrated an average true recognition rate of 100% for the proposed method. The success rate and the number of images used show the effectiveness of the proposed method in comparison to counterpart methods.

1. Introduction

Nowadays, reliable, fast, and accurate recognition of a person is an important need for security systems, and recent advances in technology together with the increasing need for security require reliable biometric systems [1]. The biometric attributes used in security systems for recognition include fingerprint, hand geometry, face, iris, and retina [2–8]. Each biometric feature has its strengths and weaknesses, and the choice depends on the application. Among these, the retina provides a higher level of security due to the uniqueness and stability of the blood vessel pattern throughout one's life. However, because of technological limitations in manufacturing the required imaging apparatus, its usage is limited to places where a high security level is needed. Depending on the application context, a biometric system may operate in verification or identification mode. In verification mode, the system confirms or denies a person's claimed identity by comparing the captured biometric features with that person's own biometric template(s) stored in the database. In identification mode, the system recognizes an individual by searching the templates of all users in the database for a match [9]. Therefore, for identification, the security system must first decide whether the person exists in the database; after confirming existence, it must determine the person's identity. Most recognition methods based on retinal images operate only in identification or only in verification mode.

The first identification system using a commercial retina scanner, called EyeDentification 7.5, was proposed by the EyeDentify Company in 1976 [10]. Some companies, such as Retica Systems Inc., are working on multi-modal security systems using the retina and iris for identification [11]. Figure 1 shows an identification system based on an iris scanner.

Figure 1. Iris scanner [12]. An identification system based on an iris scanner.

Recognition methods based on retinal images are divided into two classes. Some algorithms are based on the vessel segmentation results of retinal images; the computational time of these methods is therefore high. Other algorithms are based on features extracted from retinal images, and the implementation time of these methods is usually low. Most previously proposed methods are effective only in identification or only in verification mode. In this paper, a new identification and verification method based on retina images is presented. The proposed method consists of the Harris corner detector for feature extraction, a phase correlation technique to estimate the rotation angle of head or eye movement in front of the retina fundus camera, and a new similarity function to compute the similarity between the features of dataset and test images. In the first step, the Harris corner detector is applied as a fast feature detector. A major problem in using retinal images for human recognition is head or eye movement in front of the fundus camera; previously proposed methods offered no solution for this movement, which can cause errors in recognition based on retina images. In this paper, the phase correlation technique is used to estimate and compensate the rotation angle of head or eye movement in front of the fundus camera. Finally, using a new similarity function, the similarity of each test image to each retina image in the dataset is determined.

The rest of the paper is organized as follows. Section 2 reviews previously proposed methods for human recognition using retina images. Section 3 describes feature extraction with the Harris corner detector. Section 4 presents the phase correlation technique used to compensate the rotation angle of head or eye movement in front of the retina fundus camera. Section 5 is devoted to the similarity function for feature matching. The recognition method and evaluation results are presented in Section 6. Finally, concluding remarks are given in the last section.

2. Review of previously proposed methods

Farzin et al. [8] proposed a novel method based on features obtained from retinal images. This method was composed of three principal modules: blood vessel segmentation, feature generation, and feature matching. The blood vessel segmentation module extracted the blood vessel pattern from the retinal images. The feature generation module included optic disc detection and selection of a circular region around the optic disc of the segmented image. Then, using a polar transformation, a rotation invariant template was created. Next, these templates were analyzed at three different scales using the wavelet transform to separate vessels according to their diameter. In the last stage, vessel position and orientation at each scale were used to define a feature vector for each subject in the database. For feature matching, they introduced a modified correlation measure to obtain a similarity index for each scale of the feature vector, and then computed the total similarity index by summing scale-weighted similarity indices. The computational time of this method for segmentation, feature extraction, and matching is high, and the method was used only in identification mode.

Xu et al. [13] proposed a new method for recognition using the green grayscale ocular fundus image. In the first step, the skeleton of the fundus blood vessels was extracted using contrast-limited adaptive histogram equalization. After filtering and shape feature extraction, the shape curve of the blood vessels was obtained. Shape curve matching was then carried out by means of reference point matching. In their method, feature matching consisted of finding the affine transformation parameters that relate the query image to its best corresponding enrolled image. The computational time of this algorithm is high because a number of rigid motion parameters must be computed for all possible correspondences between the query and enrolled images in the dataset.

Ortega et al. [14] used a fuzzy circular Hough transform to localize the optic disc in retina images. They then defined feature vectors based on the ridge endings and bifurcations of vessels obtained from a crease model of the retinal vessels inside the optic disc. For matching, they used an approach similar to [13] to compute the parameters of a rigid transformation between feature vectors that gives the highest matching score. This algorithm is computationally more efficient than the algorithm presented in [13]. In [15], new methods were used to decrease the number of transformations, instead of using points at a specific distance from the optic disc center. This method was used only in verification mode.

Tabatabaee et al. [16] presented a new algorithm based on fuzzy C-means clustering. They used the Haar wavelet and a snakes model for optic disc localization. The Fourier-Mellin transform coefficients and simplified moments of the retinal image were used as the extracted features. The computational cost and implementation time of this algorithm are high, and its performance was evaluated on only a small dataset.

Shahnazi et al. [17] proposed a new method based on the wavelet energy feature (WEF), a powerful tool of multi-resolution analysis. The WEF can reflect the wavelet energy distribution of vessels of different thickness and width in several directions at different wavelet decomposition levels (scales), and thus its ability to discriminate retinas is very strong. Simple computation is another virtue of the WEF. Because semiconductor sensors and varying environmental temperatures in electronic imaging systems produce noisy images, they used noisy retinal images for recognition. This method is based on the segmentation results of retinal images and was evaluated only on the DRIVE dataset.

Oinonen et al. [18] proposed a novel method for verification based on minutiae features. The method consisted of three steps: blood vessel segmentation, feature extraction, and feature matching. In practice, vessel segmentation can be viewed as a preprocessing phase for feature extraction. Vessel crossings and their orientation information were then obtained and matched with the corresponding data from the comparison image. The method used vessel direction information for improved matching robustness. The computational time of this method for segmentation, feature extraction, and matching is high, and the method was used only in verification mode.

3. Feature extraction

Figure 2 shows a retina image; the optic disc is the brightest region in retina images [19]. Because the retina image of a person may be rotated between acquisitions and situations, rotation invariant features are the best features for recognition. Corners are rotation invariant and can be used as features for recognition. Therefore, a corner detection method is used for feature extraction.

Figure 2. Retina image.

3.1 Corner detection algorithms

We consider a corner as the intersection of two edges, or as a point in whose local neighborhood there are two dominant, different edge directions. There are several methods for corner detection [20–24]. The Harris corner detector is one of the most well-known corner detectors and is based on measuring corner strength. A comparison between the Harris corner detector and other corner detectors such as SUSAN shows the superiority of the Harris detector in terms of stability and complexity (running time) [25]. Therefore, the Harris corner detector is used for feature extraction.

3.2 Harris algorithm

In the Harris corner detector, a window is slid over the image, and the change in intensity produced by the sliding window is determined [21]. The change E produced by a displacement (x, y) in the image intensity I can be calculated using the following equation:

$$E_{x,y} = \sum_{u,v} w_{u,v} \left( I_{x+u,\,y+v} - I_{u,v} \right)^2 \tag{1}$$

where w is the moving window. The Harris corner detector looks for local maxima of min{E} over all considered moving directions [21]. The displacement term can now be expanded using the following equation:

$$E_{x,y} = \sum_{u,v} w_{u,v} \left( I_{x+u,\,y+v} - I_{u,v} \right)^2 = \sum_{u,v} w_{u,v} \left( xX + yY + O(x^2, y^2) \right)^2 \tag{2}$$

where

$$X = I \ast (-1, 0, 1) \approx \frac{\partial I}{\partial x}, \qquad Y = I \ast (-1, 0, 1)^{T} \approx \frac{\partial I}{\partial y} \tag{3}$$

Other operators such as Sobel or Prewitt can be used for the gradient in the above equation. For small displacements, E can be expressed as:

$$E_{x,y} = Ax^2 + 2Cxy + By^2 \tag{4}$$

where

$$A = X^2 \ast w, \qquad B = Y^2 \ast w, \qquad C = (XY) \ast w \tag{5}$$

To reduce the effect of noise in the retina image, a Gaussian window of the following form is used:

$$w_{u,v} = e^{-\frac{u^2 + v^2}{2\sigma^2}} \tag{6}$$

Finally, we can express E as:

$$E_{x,y} = (x, y)\, M\, (x, y)^{T} \tag{7}$$

where M is a symmetric matrix of the following form:

$$M = \begin{bmatrix} A & C \\ C & B \end{bmatrix} \tag{8}$$

E is closely related to the local autocorrelation function. Let α and β be the eigenvalues of M; three situations then arise. Figure 3 characterizes edges, corners, and regions of uniform intensity in terms of the eigenvalues: in the (α, β) space, edges lie where one eigenvalue is very small and the other very large, corners where both eigenvalues are large, and flat regions where both are small [21]. Harris and Stephens proposed the following corner measure:

$$R_{x,y} = \operatorname{Det}(M) - k \left( \operatorname{Tr}(M) \right)^2, \qquad \operatorname{Det}(M) = \alpha \beta, \qquad \operatorname{Tr}(M) = \alpha + \beta \tag{9}$$
Figure 3. Autocorrelation principal curvature space. Heavy lines give corner, edge, and flat classification; fine lines are equi-response contours [21].

Contours of R are shown in Figure 3. R is positive in the corner region, negative in the edge region, and small in the flat region. A threshold is then applied to R to find the corners. Since several detected points may lie close to each other, it is important to keep only one corner among them; therefore, a pruning algorithm is used. The proposed feature extraction method is summarized in the steps below. After these steps, the corners in the retina image are localized as features. Figure 4 shows the corners obtained for some retina images (k = 0.17, Th = 7 × 10^4, and a 7 × 7 window in step 6).

Figure 4. Corners obtained using the Harris algorithm for two retina images (a, b).

  • Step 1: Apply the gradient function in the x and y directions.

  • Step 2: Calculate the matrices A, B, and C in Equation (5).

  • Step 3: Convolve A, B, and C with the Gaussian window in Equation (6).

  • Step 4: Obtain R in Equation (9).

  • Step 5: Apply a threshold (Th) to R to eliminate points that are not corners.

  • Step 6: Finally, prune the corners: place a window around each point that survives the threshold and, within each window, keep the point with the maximum value as a corner.
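For concreteness, the six steps above can be sketched in a few lines of Python with NumPy/SciPy. This is a minimal illustration, not the authors' implementation: the values of k, Th, and the pruning window follow the text, while the Gaussian σ is an assumed value (the paper does not state it), and SciPy's normalized Gaussian smoothing may scale R, and hence the effective Th, differently.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, maximum_filter

def harris_corners(img, k=0.17, th=7e4, sigma=1.0, prune_win=7):
    # Step 1: gradients via convolution with (-1, 0, 1), as in Equation (3)
    X = convolve(img.astype(float), np.array([[-1.0, 0.0, 1.0]]))
    Y = convolve(img.astype(float), np.array([[-1.0], [0.0], [1.0]]))
    # Steps 2-3: A, B, C of Equation (5), smoothed by the Gaussian window of Equation (6)
    A = gaussian_filter(X * X, sigma)
    B = gaussian_filter(Y * Y, sigma)
    C = gaussian_filter(X * Y, sigma)
    # Step 4: corner measure R of Equation (9)
    R = (A * B - C * C) - k * (A + B) ** 2
    # Step 5: threshold out points that are not corners
    R[R < th] = 0.0
    # Step 6: pruning -- keep only the local maximum inside each window
    peaks = (R == maximum_filter(R, size=prune_win)) & (R > 0)
    ys, xs = np.nonzero(peaks)
    return np.column_stack([xs, ys])   # corner coordinates (x, y)
```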

4. Rotation compensation

One of the most important problems in using retina images for recognition is the head or eye movement in front of the fundus camera. Therefore, a method based on the Fourier-Mellin transform is used to estimate the rotation angle of head or eye movement [26].

4.1 Phase correlation technique

Suppose that $f_s$ and $f_r$ are the input and reference images, which differ only by a displacement $(x_0, y_0)$. Therefore,

$$f_s(x, y) = f_r(x - x_0,\, y - y_0) \tag{10}$$

If $F_s$ and $F_r$ are the Fourier transforms of $f_s$ and $f_r$, we may write:

$$F_s(u, v) = e^{-j 2\pi (u x_0 + v y_0)}\, F_r(u, v) \tag{11}$$

The cross-spectrum of $F_s$ and $F_r$ is defined as:

$$R = \frac{F_r(u, v)\, F_s^{*}(u, v)}{\left| F_r(u, v)\, F_s(u, v) \right|} = e^{\,j 2\pi (u x_0 + v y_0)} \tag{12}$$

where $F^{*}$ is the complex conjugate of $F$. Taking the inverse Fourier transform of R yields an impulse function:

$$\mathcal{F}^{-1}\{R\} = \delta(x - x_0,\, y - y_0) \tag{13}$$

By detecting the peak of $\delta(x - x_0,\, y - y_0)$, the translation parameters $(x_0, y_0)$ can be acquired.
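As an illustration of Equations (11) to (13), a minimal NumPy sketch follows. It assumes the usual DFT conventions, so the recovered shifts are given modulo the image size (a negative shift appears near the far edge of the correlation surface).

```python
import numpy as np

def phase_correlation(f_r, f_s):
    """Recover (x0, y0) where f_s(x, y) = f_r(x - x0, y - y0), per Equation (10)."""
    F_r = np.fft.fft2(f_r)
    F_s = np.fft.fft2(f_s)
    cross = F_s * np.conj(F_r)
    R = cross / (np.abs(cross) + 1e-12)   # normalized cross-spectrum, Equation (12)
    corr = np.abs(np.fft.ifft2(R))        # impulse of Equation (13)
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)
    return x0, y0                          # shifts modulo the image size
```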

4.2 Phase correlation technique for detection of translation and rotation

If $f_s(x, y)$ is a rotated replica of $f_r(x, y)$ with rotation angle $\theta_0$ and translation $(x_0, y_0)$, then

$$f_s(x, y) = f_r(x \cos\theta_0 - y \sin\theta_0 - x_0,\; x \sin\theta_0 + y \cos\theta_0 - y_0) \tag{14}$$

According to the properties of the Fourier transform, the transforms $F_s$ and $F_r$ have the following relationship:

$$F_s(u, v) = e^{-j 2\pi (u x_0 + v y_0)}\, F_r(u \cos\theta_0 - v \sin\theta_0,\; u \sin\theta_0 + v \cos\theta_0) \tag{15}$$

This means that $|F_s|$ is a rotated replica of $|F_r|$ with rotation angle $\theta_0$ and no translation. The rotation angle can be obtained by representing $|F_s|$ and $|F_r|$ in polar coordinates, as shown in Equation (16):

$$\left| F_s(\rho, \theta) \right| = \left| F_r(\rho, \theta - \theta_0) \right| \tag{16}$$

Then, the angle $\theta_0$ can be obtained by phase correlation. The proposed method for estimating the rotation angle is shown in Figure 5.

Figure 5. Phase correlation technique: the proposed method for estimating the rotation angle.
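The scheme of Figure 5 can be sketched as follows, assuming a simple polar resampling of the Fourier magnitude and a 1-D phase correlation along the angular axis. The resampling resolution (180 × 360 bins) and the radial averaging are illustrative choices, not details given in the paper; also note that the magnitude spectrum of a real image is symmetric, so this sketch recovers the angle modulo 180 degrees.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(mag, n_rho=180, n_theta=360):
    """Resample a centered magnitude spectrum onto a (rho, theta) grid."""
    h, w = mag.shape
    cy, cx = h / 2.0, w / 2.0
    rho = np.linspace(0, min(cy, cx) - 1, n_rho)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    tt, rr = np.meshgrid(theta, rho)
    coords = np.stack([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(mag, coords, order=1)

def rotation_angle(f_r, f_s):
    # |F| is translation invariant, so only the rotation survives (Equation (15))
    m_r = to_polar(np.abs(np.fft.fftshift(np.fft.fft2(f_r))))
    m_s = to_polar(np.abs(np.fft.fftshift(np.fft.fft2(f_s))))
    # 1-D phase correlation along theta, after averaging over rho (Equation (16))
    p_r, p_s = m_r.sum(axis=0), m_s.sum(axis=0)
    cross = np.fft.fft(p_s) * np.conj(np.fft.fft(p_r))
    corr = np.abs(np.fft.ifft(cross / (np.abs(cross) + 1e-12)))
    bins = np.argmax(corr)
    return bins * 360.0 / p_r.size   # theta_0 in degrees (modulo 180)
```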

5. Similarity function

For recognition based on the corners obtained from vessel crossings, a new similarity function is used to match the features of the dataset and test images. In contrast to other methods, such as the one proposed in [14], instead of directly finding similar points in two point sets, we first build a model that describes the relations between the points within each point set (image), without needing to segment the vessels in the retina images. We then compute the similarity between the model functions of different images to measure the similarity between the images. Consequently, the number of similar points in two images does not play an important role in the similarity result, because matching and recognition are based on the similarity between the model functions of the images.

5.1 Model construction

To match interest points in different images, a model that describes the similarity between the points of each image is first constructed [27]. Consider a set of J interest points in the source image:

$$S_F = \left\{ S_{F_1}, S_{F_2}, \dots, S_{F_j}, \dots, S_{F_J} \right\}, \qquad S_{F_j} = (x, y) \tag{17}$$

A rotation invariant coordinate system is defined at each interest point $S_{F_j}$. Then, a set of J feature vectors $v_{jk} = (r_{jk}, \theta_{jk})$ is formulated at each point j, where $(r_{jk}, \theta_{jk})$ are the polar coordinates of point $S_{F_k}$ with respect to the invariant coordinate system at the interest point $S_{F_j}$:

$$r_{jk} = r_j - r_k, \qquad \theta_{jk} = \theta_j - \theta_k, \qquad k = 1, \dots, J \tag{18}$$

$r_{jk}$ is the distance between the two interest points $S_{F_j}$ and $S_{F_k}$, and $\theta_{jk}$ corresponds to the orientation of interest point $S_{F_j}$ with respect to $S_{F_k}$. The similarity model between an unknown vector v and the feature vectors at point $S_{F_j}$ is given by:

$$S_M(v \mid v_j) = \max_{k = 1, \dots, J} \exp\left( -\alpha \left\| v - v_{jk} \right\|^2 \right) \tag{19}$$

The model function $S_M(v \mid v_j)$ describes the closeness and similarity of a given feature vector v to all the feature vectors around the interest point $S_{F_j}$; $\alpha$ is a constant that takes different values (see Section 6.2). Therefore, using Equation (18), we obtain a model that describes the closeness and relations between the corners of each point set, and Equation (19) is then used to calculate the similarity between the model functions of different images.
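A brief sketch of the model construction is shown below. Following the simplification stated at the end of Section 5.2, only the distances $r_{jk}$ are used, so each image is summarized by its pairwise distance matrix; the function and array names are illustrative, not the authors' code.

```python
import numpy as np

def distance_matrix(points):
    """r_jk for every pair of interest points (Equation (18), distances only).
    `points` is a (J, 2) array of corner coordinates (x, y)."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=2)              # shape (J, J)

def model_similarity(v, v_j, alpha=0.25):
    """S_M(v | v_j) of Equation (19): closeness of a feature value v to the
    feature vector v_j built around one interest point."""
    return float(np.max(np.exp(-alpha * (v - np.asarray(v_j)) ** 2)))
```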

5.2 Matching

Assume that the source (reference) image has a set of J interest points $S_F = \{S_{F_1}, S_{F_2}, \dots, S_{F_J}\}$ with $S_{F_j} = (x, y)$; using Equation (18), we compose the matrix that describes the distances and orientations of the points in $S_F$ with respect to each other. Let the target image have M interest points $T_F = \{T_{F_1}, T_{F_2}, \dots, T_{F_M}\}$ with $T_{F_m} = (x, y)$; using Equation (18) again, we compose the matrix that describes the distances and orientations of the points in $T_F$ with respect to each other. The matching between the two point sets ($S_F$ and $T_F$) is calculated using the following equations:

$$\text{Matching} = \sum_{i=1}^{J} A_i, \qquad A_i = \max_{j = 1, \dots, M} S_M\!\left( v_{F_i} \mid v_{T_j} \right) \tag{20}$$

Therefore, using the above equations, the similarity of two retina images is calculated. For matching, we have two matrices, and we find the similarity between each element of the first matrix (obtained from the corners of the first retina image) and all elements of the second matrix (obtained from the corners of the second retina image). The best match for the source image occurs when:

$$\text{Matching} > T \tag{21}$$

where T is a predetermined threshold, set to 0.3. In this paper, to decrease the computational time, only $r_{jk}$ is used to compose the model function.
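The matching rule of Equations (20) and (21) can be sketched as below. Because Equation (20) sums per-point maxima while the threshold T = 0.3 lies in [0, 1], this sketch averages the maxima so the score is comparable with T; that normalization is our assumption, not something stated in the paper.

```python
import numpy as np

def matching_score(R_src, R_tgt, alpha=0.25):
    """R_src (J x J) and R_tgt (M x M) are distance matrices from Equation (18).
    Every source element is compared against all target elements, per the text."""
    tgt = R_tgt.ravel()
    maxima = [np.max(np.exp(-alpha * (v - tgt) ** 2)) for v in R_src.ravel()]
    return float(np.mean(maxima))   # averaged so the score falls in [0, 1]

def is_match(R_src, R_tgt, T=0.3):
    # Equation (21): the images match when the score exceeds the threshold T
    return matching_score(R_src, R_tgt) > T
```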

6. Proposed method

Previous sections were devoted to feature extraction (using corner detector) and feature matching methods. This section explains the proposed method for recognition.

In the first step, the Harris corner detector is applied to the dataset images to extract corners as features. The parameters used are k = 0.17, Th = 7 × 10^4, and a 7 × 7 pruning window, and only corners whose distance from the center of the image is less than 190 pixels are used for recognition. Then, the similarity model explained in Section 5 is used to formulate a set of vectors for each retina image.

6.1 Test images

As noted, the most important problem in using retinal images for recognition is head or eye movement in front of the fundus camera. We therefore estimate the rotation angle of the test image with respect to each dataset image using the phase correlation technique, and rotate the test image by the negative of the angle obtained for each dataset image. For the new image, we apply the Harris corner detector and, to decrease the computational cost, use only corners whose distance from the center of the image is less than 190 pixels. Finally, we use the similarity model explained above to obtain the similarity between the corners. Therefore, for each test image, as many sets of similarity vectors and model functions are obtained as there are images in the dataset.

6.2 Recognition method

Our recognition method contains two steps. For each test image, we determine its rotation angle with respect to each dataset image and rotate the test image by the negative of that angle. Then, a region of 460 × 460 pixels around the center of the new image is extracted. The absolute difference between the histogram of the new rotated image and the histogram of each dataset image (also a 460 × 460 pixel region around the center) is calculated, and the differences are summed to obtain a value for each test image against each dataset image. The constant α in Equation (19) then takes different values depending on this value: if the histogram difference is less than 28,000, α = 0.25; otherwise, α = 1. Using the exponential function, we obtain similarity results in the range [0, 1] for each test image.
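The decision logic of this section can be sketched as follows, using the stated constants (460 × 460 central crop, histogram-difference cutoff of 28,000, α ∈ {0.25, 1}, and T = 0.3). The helper names are ours, and `matching_score` refers to the sketch given in Section 5.2.

```python
import numpy as np

def central_crop(img, size=460):
    h, w = img.shape
    y0, x0 = (h - size) // 2, (w - size) // 2
    return img[y0:y0 + size, x0:x0 + size]

def histogram_difference(img_a, img_b, bins=256):
    h_a, _ = np.histogram(central_crop(img_a), bins=bins, range=(0, 256))
    h_b, _ = np.histogram(central_crop(img_b), bins=bins, range=(0, 256))
    return int(np.sum(np.abs(h_a - h_b)))

def identify(test_img, R_test, dataset, T=0.3):
    """dataset: list of (image, distance_matrix) pairs; returns index or None."""
    scores = []
    for img, R_ref in dataset:
        # the histogram cutoff of 28,000 selects alpha, as described above
        alpha = 0.25 if histogram_difference(test_img, img) < 28000 else 1.0
        scores.append(matching_score(R_test, R_ref, alpha))
    best = int(np.argmax(scores))
    return best if scores[best] > T else None   # threshold T from Section 6.3
```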

6.3 Experiments

We applied the proposed algorithm on a database of 80 subjects: 40 images from the DRIVE database (565 × 584 pixels) [28] and 40 images from the STARE database (720 × 576 pixels) [29]. Each image was randomly rotated six times to obtain 480 images. We evaluated the performance of our recognition method in five different experiments, as follows. In identification mode, the security system must first decide whether the test image (person) exists in the dataset; after confirming existence, it must determine the identity of the test person. In verification mode, the system confirms or denies a person's claimed identity: if the similarity between the retina image of the test person and the retina image of the claimed identity is larger than a predetermined threshold, the system confirms the claim. For identification mode, all steps explained in Sections 6.1 and 6.2 are performed, and the similarity between the retina image of the test person and all retina images in the dataset is determined. If the maximum similarity exceeds the predetermined threshold, the test image belongs to the dataset, and the dataset image producing the maximum similarity gives the identity of the test image. To obtain the threshold, we scanned a range of different values; the best threshold for both identification and verification mode is 0.3.
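As a side note, the query set described above can be generated along these lines; the rotation-angle range is an assumption, since the paper does not state it.

```python
import numpy as np
from scipy.ndimage import rotate

def make_queries(images, copies=6, max_angle=30.0, seed=0):
    """Randomly rotate each enrolled image `copies` times to build the query set."""
    rng = np.random.default_rng(seed)
    queries = []
    for img in images:
        for _ in range(copies):
            angle = rng.uniform(-max_angle, max_angle)   # assumed angle range
            queries.append(rotate(img, angle, reshape=False, mode='nearest'))
    return queries
```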

6.3.1 Experiment A

The first 30 images of the DRIVE database and the first 30 images of the STARE database were enrolled, and 80 images of the DRIVE and STARE databases, with six images per subject, were entered into the system as queries.

6.3.2 Experiment B

The last 30 images of the DRIVE database and the last 30 images of the STARE database were enrolled, and 80 images of the DRIVE and STARE databases, with six images per subject, were entered into the system as queries.

6.3.3 Experiment C

The first 20 images of the DRIVE database and the first 20 images of the STARE database were enrolled, and 80 images from the DRIVE and STARE databases, with six images per subject, were entered into the system as queries.

6.3.4 Experiment D

The first 25 images of the DRIVE database and the last 25 images of the STARE database were enrolled, and 80 images of the DRIVE and STARE databases, with six images per subject, were entered into the system as queries.

6.3.5 Experiment E

Forty images of the DRIVE database and 40 images of the STARE database were enrolled, and 80 images from the DRIVE and STARE databases, with six images per subject, were entered into the system as queries. Some of the retina images used in the proposed method are shown in Figure 6. Table 1 reports the results of the proposed method for the different experiments.

Figure 6. Some of the retinal images in the DRIVE and STARE datasets (a, b, c, d).

Table 1 Experimental results

These experiments demonstrated that the security system has an average accuracy of 100%. On the experiments performed in [8], the proposed method also achieves an average accuracy of 100% (these experiments resemble experiments A to D, with some differences in the dataset and in the number of images used for enrollment). Figure 7 shows the variation of the FRR (false rejection rate) and FAR (false acceptance rate) for different thresholds in the final matching step, when the difference between the histograms is 5 × 10^4.

Figure 7. FAR and FRR for different thresholds in the final matching step, when the difference between the histograms is 5 × 10^4.

Figure 7 shows that the proposed method is not sensitive to the threshold in the range [0.2, 0.3]; therefore, the final-step threshold can be chosen anywhere in this range. Figure 8 shows the human recognition results for different values of k in the Harris corner detector, when the histogram threshold is 5 × 10^4; the proposed method is not sensitive to k in the range [0.15, 0.2]. For better comparison, the results of the counterpart methods, together with the datasets used, are presented in Table 2. Most counterpart methods can be used only in identification or only in verification mode, whereas the proposed method can be used in both modes, with an accuracy rate of 100% in each.

Figure 8. Human recognition results for different values of k in the Harris corner detector, when the histogram threshold is 5 × 10^4.

Table 2 Results of different recognition methods

7. Conclusions

In this paper, we proposed a new human recognition method based on corners and a similarity function of retinal images, without using vessel segmentation methods. Security systems, depending on their application, may operate in identification or verification mode. Unlike the counterpart methods [8, 13, 14, 29–32], the proposed method does not use segmentation methods, which increase computational time. We compensated the rotation angle of head or eye movement (gaze angle) in front of the retina fundus camera using the phase correlation technique, which was not addressed in the counterpart methods. To extract corners in the retina image, the Harris corner detector was used together with additional steps such as thresholding and pruning. In situations where pathological regions are present in the retinal images, the Harris corner detector may not be effective, and other corner detectors that may be effective must be used instead.

Then, corners whose distance from the center of the image was less than 190 pixels were used to decrease the computational time of the proposed method. Finally, we used a similarity function with some limitations on its coefficient for recognition. The advantage of the similarity function used in this paper is that we first obtained the similarity between the points within each point set and then, using the similarity function, estimated the similarity between the two point sets. Unlike the similarity function used in the counterpart method [14], the number of similar points in the two point sets does not play an important role in recognition. The success rate and the number of images used show the effectiveness of the proposed method in comparison to the counterpart methods.

References

  1. Dehghani A, Abrishami Moghaddam H, Moin M-S: Retinal identification based on rotation invariant moments. In Proceedings of the 5th International Conference on Bioinformatics and Biomedical Engineering. Wuhan; 2011.

  2. Kovacs-Vajna ZM: A fingerprint verification system based on triangular matching and dynamic time warping. IEEE Trans Pattern Anal Mach Intell 2000, 22:1266-1276.

  3. Kumar A, Zhang D: Hand geometry recognition using entropy-based discretization. IEEE Trans Inf Forensics Security 2007, 2(2):181-187.

  4. Anbarjafari G: Face recognition using color local binary pattern from mutually independent color channels. EURASIP J Image Video Process 2013, 2013:6.

  5. Lin C-H, Chen J-L, Gaing Z-L: Combining biometric fractal pattern and particle swarm optimization-based classifier for fingerprint recognition. Math Probl Eng 2010, 2010:1-15.

  6. Rossant F, Mikovicova B, Adam M, Trocan M: A robust iris identification system based on wavelet packet decomposition and local comparisons of the extracted signatures. EURASIP J Adv Signal Process 2010, 2010:415307.

  7. Barzegar N, Moin M-S: A new user dependent iris recognition system based on an area preserving pointwise level set segmentation approach. EURASIP J Adv Signal Process 2009, 2009:980159.

  8. Farzin H, Abrishami Moghaddam H, Moin M-S: A novel retinal identification system. EURASIP J Adv Signal Process 2008, 2008:280635.

  9. Jain AK, Ross A, Prabhakar S: An introduction to biometric recognition. IEEE Trans Circuits Syst Video Technol 2004, 14(1):4-20.

  10. Hill RB: Retinal identification. In Biometrics: Personal Identification in Networked Society. Springer, Berlin; 1999:126.

  11. Retica Systems. http://venturebeatprofiles.com/company/profile/retica-systems

  12. Iris recognition. http://en.wikipedia.org/wiki/Iris_recognition

  13. Xu Z-W, Guo X-X, Hu X-Y, Cheng X: The blood vessel recognition of ocular fundus. In Proceedings of the 4th International Conference on Machine Learning and Cybernetics. Guangzhou; 2005:4493-4498.

  14. Ortega M, Penedo MG, Rouco J, Barreira N, Carreira MJ: Retinal verification using a feature points-based biometric pattern. EURASIP J Adv Signal Process 2009, 2009:1-13.

  15. Ortega M, Marino C, Penedo MG, Blanco M, Gonzalez F: Biometric authentication using digital retinal images. In Proceedings of the 5th WSEAS International Conference on Applied Computer Science. Hangzhou; 2006:422-427.

  16. Tabatabaee H, Milani-Fard A, Jafariani H: A novel human identifier system using retina image and fuzzy clustering approach. In Proceedings of the 2nd IEEE International Conference on Information and Communication Technologies. Damascus; 2006:1031-1036.

  17. Shahnazi M, Pahlevanzadeh M, Vafadoost M: Wavelet based retinal recognition. In Proceedings of the 9th IEEE International Symposium on Signal Processing and Its Applications. Sharjah; 2007:1-4.

  18. Oinonen H, Forsvik H, Ruusuvuori P, Yli-Harja O, Voipio V, Huttunen H: Identity verification based on vessel matching from fundus images. In Proceedings of the 17th International Conference on Image Processing. Hong Kong; 2010:4089-4092.

  19. Vijaya Kumari V, Suriyanarayanan N: Blood vessel extraction using wiener filter and morphological operation. Int J Comput Sci Emerg Tech 2010, 1(4):7-10.

  20. Moravec HP: Towards automatic visual obstacle avoidance. In Proceedings of the International Joint Conference on Artificial Intelligence. Cambridge, MA; 1977:584.

  21. Harris C, Stephens M: A combined corner and edge detector. In Proceedings of the Alvey Vision Conference. University of Manchester; 1988:147-151.

  22. Noble A: Finding corners. Image Vis Comput J 1988, 6(2):121-128.

  23. Mokhtarian F, Suomela R: Robust image corner detection through curvature scale space. IEEE Trans Pattern Anal Mach Intell 1998, 20(12):1376-1381.

  24. Rosten E, Drummond T: Machine learning for high-speed corner detection. In Computer Vision - ECCV 2006, Lecture Notes in Computer Science, vol. 3951. Edited by Leonardis A, Bischof H, Pinz A. Springer, Heidelberg; 2006:430-443.

  25. Chen J, Zou LH, Zhang J, Dou LH: The comparison and application of corner detection algorithms. J Multimed 2009, 4(6):435-441.

  26. Lin L, Liu Y: Registration algorithm based on image matching for outdoor AR system with fixed viewing position. IEE Proc Vis Image Signal Process 2006, 153(1):57-62.

  27. Krish K, Heinrich S, Snyder WE, Cakir H, Khorram S: A new feature based image registration algorithm. In ASPRS Annual Convention. Portland, OR; 2008.

  28. Dehghani A, Abrishami Moghaddam H, Moin M-S: Optic disc localization in retinal images using histogram matching. EURASIP J Image Video Process 2012, 1:1-11.

  29. Dehghani A, Moin M-S, Saghafi M: Localization of the optic disc center in retinal images based on the Harris corner detector. Biomed Eng Lett 2012, 2(3):198-206.

  30. Islam MN, Siddiqui MA, Paul S: An efficient retina pattern recognition algorithm (RPRA) towards human identification. In Proceedings of the 2nd International Conference on Computer, Control and Communication. Karachi; 2009:1-6.

  31. Sukumaran S, Punithavalli M: Retina recognition based on fractal dimension. IJCSNS Int J Comput Sci Netw Secur 2009, 9(10):66-7.

  32. Barkhoda W, Akhlaqian Tab F, Deljavan Amiri M: Rotation invariant retina identification based on the sketch of vessels using angular partitioning. In Proceedings of the International Multiconference on Computer Science and Information Technology. Mragowo; 2009:3-6.

  33. Barkhoda W, Akhlaqian Tab F, Deljavan Amiri M, Nouroozzadeh M-S: Retina identification based on the pattern of blood vessels using fuzzy logic. EURASIP J Adv Signal Process 2011, 2011:1-8.


Acknowledgments

This work was partially supported by Research Institute for ICT under grant no. T-500-4789.

Author information


Correspondence to Amin Dehghani.

Additional information

Competing interests

The authors declare that they have no competing interests.

Amin Dehghani and Hamid Abrishami Moghddam contributed equally to this work.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Dehghani, A., Ghassabi, Z., Moghddam, H.A. et al. Human recognition based on retinal images and using new similarity function. J Image Video Proc 2013, 58 (2013). https://doi.org/10.1186/1687-5281-2013-58
